Running Exchange Scripts as a Scheduled Task

Quite often I run into times when I want to automate a task in Exchange.  I usually use Task Scheduler to schedule any PowerShell script I want to run, but it can sometimes be a bit tricky.  There are many ways to do this, and here are a couple which work well for me.

I’m assuming you’ve got your Exchange script written and have tested it running from the Exchange Management Shell.  If it has issues running in the Exchange Management Shell, fix those issues first before you schedule the script.

In Task Scheduler, right-click and select Create Task.

schtask1
Enter a name and description to match your script, make sure "Run whether user is logged on or not" is selected, and tick "Run with highest privileges".

schtask2

Go to the Triggers Tab and Click the New Button.

schtask3

In here, we can set up a trigger based on a scheduled time.  There are other triggers available as well, but as we're just scheduling the script to run at a certain time, we'll stick with the "On a schedule" trigger.

schtask4

Fill in the time(s) you want to run the script and press OK.

Note: You can create multiple triggers if you want to run your script more frequently or on different schedules.
schtask5

Go to the Actions Tab and Click the New Button

schtask6

In this area we will enter the details to run the script.

During my testing, I’ve found two ways which this can be done.

The 1st method below is more convenient, but it does spawn additional PowerShell processes.  If you schedule a lot of PowerShell scripts, you may want to take this overhead into account on the host running the scripts.

The 2nd method runs scripts with the Exchange environment, but it’s usually necessary to have extra PowerShell in your script to connect to Exchange.

Both methods assume that your script will run in PowerShell versions 1-4 and that you know the version and install path of Exchange on the host running the script.

1st Method

In the Program/script: text box we need to enter the full path to PowerShell on this host.

By default, it is:
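C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe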

You can check this is correct by pressing the Browse button and selecting it.

In the Add arguments (optional): text box we need to enter arguments specific to your script.

For example, this is what would be used for Exchange 2013 installed on Drive C.
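The exact arguments from the original post aren't preserved here; a typical set that matches this method (dot-sourcing RemoteExchange.ps1 from the default Exchange 2013 install path) would be:

-NonInteractive -WindowStyle Hidden -Command ". 'C:\Program Files\Microsoft\Exchange Server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -Auto; & 'C:\Scripts\MyScript.ps1'"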

If you are using Exchange 2010, it would be slightly different.
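For Exchange 2010 the RemoteExchange.ps1 path moves to the V14 directory, so the arguments would be along these lines:

-NonInteractive -WindowStyle Hidden -Command ". 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'; Connect-ExchangeServer -Auto; & 'C:\Scripts\MyScript.ps1'"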

Note: in the above examples, you’ll need to replace C:\Scripts\MyScript.ps1 with the full path to where the script is stored on this host.

Once you’ve entered this, press OK

2nd Method

In the Program/script: text box we need to enter the full path to PowerShell on this host.

By default, it is:
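C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe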

You can check this is correct by pressing the Browse button and selecting it.

In the Add arguments (optional): text box we need to enter arguments specific to your script.

For example, this is what would be used for Exchange 2013 installed on Drive C.  If you are using Exchange 2010, you’ll need to update the path to the PSConsoleFile to the Exchange 2010 version.
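The original arguments aren't preserved here; a form that matches this method, assuming the default Exchange 2013 install path, loads the Exchange console file with -PSConsoleFile:

-NonInteractive -WindowStyle Hidden -PSConsoleFile "C:\Program Files\Microsoft\Exchange Server\V15\bin\exshell.psc1" -Command "& 'C:\Scripts\MyScript.ps1'"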

 

Note: in the above examples, you’ll need to replace C:\Scripts\MyScript.ps1 with the full path to where the script is stored on this host.

Once you’ve entered this, press OK

Then press OK again to create the scheduled task.

When you do this, you will be prompted for credentials.  Make sure that you use credentials which have sufficient Exchange privileges to run the cmdlets used in your script.

Note: Consider running scripts from a server or workstation with the Exchange Management Tools installed which is not providing Exchange services (i.e. not an Exchange server).  This helps protect your Exchange servers, as you are not running scheduled tasks on them as the local administrator.

Tips

Tip #1

Check the installation path for Exchange on the host.  If it was installed on Drive D, the arguments in the scheduled task would look like this.
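For example (assuming the same 1st-method arguments as above, with Exchange 2013 installed on Drive D):

-NonInteractive -WindowStyle Hidden -Command ". 'D:\Program Files\Microsoft\Exchange Server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -Auto; & 'C:\Scripts\MyScript.ps1'"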

Tip #2

Use logging in your scripts and don't rely on the scheduled task logs.  The scheduled task logs simply tell you whether the task started/completed properly and don't tell you anything about how your script ran.  Using a transcript in your script can be a quick way to add logging: just add Start-Transcript to the start of your script and Stop-Transcript to the end.
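A minimal sketch (the log path is just an example):

Start-Transcript -Path "C:\Scripts\Logs\MyScript.log" -Append

# ... the rest of your script ...

Stop-Transcript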

Check out more info about start-transcript here.

Sending Notifications to Teams using PowerShell

I thought that, as so many of us are now using Teams, why not send more information to a channel, such as the results of automated PowerShell scripts.

We can do this via a webhook.

So let's set up a webhook on the Teams channel.  We'll need to do this via a connector.

Open the channel, click the More Options button (the three dots at the top right of the window) and select Connectors.

Teams-Scripts-02

Scroll down to Incoming Webhook and click the Add button.

Teams-Scripts-03

Give the connector a name and image of your choosing and finally click Create.

Teams-Scripts-04

A new unique URI is automatically generated. Copy this URI string to your clipboard.

Teams-Scripts-05

Note: These details remain available if you lose the URL in the future.  Select the connector and click the Manage button so you can copy the URI string again.

Now we just need some PowerShell to post to the webhook; the payload needs to be in JSON format.
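The original code isn't preserved here; a minimal sketch that posts a simple text payload to the webhook (the URI is a placeholder for the one you copied from your connector) looks like this:

# Replace with the URI copied from your Incoming Webhook connector
$uri = "https://outlook.office.com/webhook/your-unique-id"

$body = ConvertTo-Json @{ text = "MyScript.ps1 completed successfully." }

Invoke-RestMethod -Uri $uri -Method Post -ContentType 'application/json' -Body $body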

This worked great.

Teams-Scripts-07a

But I'd like to add more detail to the notifications, such as which script ran and any errors it encountered. Message cards looked good, so I decided to try a basic message card.

Message cards can do a lot more than notifications too. If you're interested, you can check out more about message cards in the references below.
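The original card code isn't preserved here; a basic MessageCard payload, with example field values and reusing the $uri from the earlier sketch, might be built like this:

$card = @{
    '@type'    = 'MessageCard'
    '@context' = 'https://schema.org/extensions'
    summary    = 'Script notification'
    themeColor = '0078D7'
    title      = 'MyScript.ps1 finished'
    sections   = @(
        @{
            facts = @(
                @{ name = 'Host';   value = $env:COMPUTERNAME }
                @{ name = 'Result'; value = 'Completed, 0 errors' }
            )
        }
    )
}

Invoke-RestMethod -Uri $uri -Method Post -ContentType 'application/json' -Body (ConvertTo-Json $card -Depth 6)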

Teams-Scripts-07b

I personally find this format much better for Teams notifications, as it can carry a lot more information from the scripts.

Reference

Message Cards
https://docs.microsoft.com/en-us/outlook/actionable-messages/card-reference
https://messagecardplayground.azurewebsites.net/

Home Assistant and MQTT

After getting Home Assistant up and running, the next thing I wanted to do was to add MQTT so I could connect sensors. I decided to use mosquitto for MQTT.

First, install the mosquitto server, client and Python mosquitto packages.
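The original commands aren't preserved here; on Raspbian/Hassbian of that era it was something like the following (package names vary between releases):

sudo apt-get update
sudo apt-get install mosquitto mosquitto-clients python-mosquitto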

Now that it's installed, let's set it up. First, let's create the directory where it will keep its persistence db files, not forgetting to change the directory owner to the mosquitto user.
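The directory path below is an example; match it to whatever persistence_location you use in the config:

sudo mkdir -p /var/lib/mosquitto
sudo chown mosquitto:mosquitto /var/lib/mosquitto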

Now let's update the configuration file. Below is what I've got in mine.

sudo nano /etc/mosquitto/mosquitto.conf
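The original file contents aren't preserved; a minimal config along the same lines (persistence plus password authentication) would be:

# /etc/mosquitto/mosquitto.conf - minimal example
persistence true
persistence_location /var/lib/mosquitto/
allow_anonymous false
password_file /etc/mosquitto/passwd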

Now to add some usernames/passwords.

This is how you’ll create the passwd file with the first user.
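sudo mosquitto_passwd -c /etc/mosquitto/passwd firstuser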

After that, you add more users without the -c parameter, like this.
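sudo mosquitto_passwd /etc/mosquitto/passwd seconduser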

Now let's restart mosquitto.
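sudo systemctl restart mosquitto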

After the service has restarted, verify that mosquitto has started
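sudo systemctl status mosquitto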

Alright, now that it's up and running, let's give it a test.  I tested mine by connecting to my Raspberry Pi with two SSH sessions, one to subscribe to messages and one to send them. You'll need to update the IP address, port number and username/password to suit your setup.

Subscribe to messages with topic test_mqtt
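mosquitto_sub -h 192.168.0.10 -p 1883 -u user1 -P password1 -t test_mqtt -v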

Send a message to the topic test_mqtt
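mosquitto_pub -h 192.168.0.10 -p 1883 -u user1 -P password1 -t test_mqtt -m "Test Message"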

You should see the “Test Message” message arrive in your SSH session running the mosquitto subscribe.

Now let's add it to Home Assistant

We now need to add some additional configuration to the Home Assistant configuration file for MQTT.

Add something similar to the following, customising the IP address and port number you are running mosquitto on (from the mosquitto configuration file) and a valid username/password.
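For example (the values are placeholders):

mqtt:
  broker: 192.168.0.10
  port: 1883
  username: user1
  password: password1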

And then restart Home Assistant
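On Hassbian the service name is home-assistant@homeassistant, so something like:

sudo systemctl restart home-assistant@homeassistant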

This will get mosquitto up and running. You can now use MQTT with Home Assistant and send/receive messages to MQTT sensors and clients.

Quickly retrieving data from Office 365 mailboxes 

Often we need to perform searches over many mailboxes to find just those few which match certain criteria.

One that I recently had was to find users which have a forward on their mailbox (set on the mailbox attribute) in Office 365.

One option I had was I could run:
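The original command isn't preserved here; based on the description below, it was along these lines:

Get-Mailbox -ResultSize Unlimited | Where-Object { $_.ForwardingSmtpAddress -like '*' }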

This will work.  However, if you’ve got a lot of users (e.g. over 100,000 users) it can be very slow.  But why?

As Office 365 is a shared platform, Microsoft throttles PowerShell commands.  They throttle PowerShell in a number of ways; in this instance, by the amount of data that is sent back to your local PowerShell session.

But if I’ve only got 5 matching users out of thousands, surely that can’t be much data.. why is it throttled?

This is where it becomes important to know where the work for the PowerShell command is being done and when the data is being transferred.  As you are connecting to Office 365, some parts of the PowerShell command execute on the remote server (Office 365 in our case), some execute locally where you launched PowerShell, and data is passed between them.

In the above example, get-mailbox -resultsize unlimited retrieves the data from every mailbox.  This is done on the remote Office 365 server and then it is all sent back to your local PowerShell session.  Once it arrives locally, the where ($_.ForwardingSmtpAddress -like '*') is executed on the data.  The data transferred between the remote Office 365 server and your local session contains all the mailbox data, and due to its size it gets throttled, slowing the command down and delaying the results.

We can speed this up!

The where executes locally, so we really want the filtering to happen on the remote server, decreasing the amount of data transferred back to your PowerShell session.  So how do we do that?  We can filter the results of the get-mailbox command using the -filter parameter; the command then only returns what matches the filter, and the filtering is all performed on the remote server.

So we could change the example above, which filters client-side with where, to filter on the server instead.
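A sketch of the filtered form (the exact filter from the original isn't preserved):

Get-Mailbox -ResultSize Unlimited -Filter { ForwardingSmtpAddress -ne $null }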

 

Prove it..

Unfortunately, due to the way these commands work, we can't use the -resultsize parameter to prove that these commands are faster. If we did use -resultsize 100, for example, the first command would only search the first 100 mailboxes, while the updated version would search all mailboxes and return only the first 100 forwarding results.

So in order to show the results, I’ll run this on a tenancy which contains over 300,000 mailboxes.  The original PowerShell command took over 2.5 hours while the new command took just over 10 minutes.

PS_Forwarding

As you can see, the larger the number of mailboxes being searched the more benefit using -filter will give you.

If you have a small number of users in your tenancy, you probably won't see much difference using the -filter parameter; however, if your company expands or your scripts are used on larger Office 365 tenancies, using -filter may help your scripts get quicker results.

Setting up Home Assistant on the Raspberry Pi

Lately I've been playing with Home Assistant (open source) on my Raspberry Pi for home automation.  I was surprised at the amount of support that is currently available and how flexible and easy to set up it is.  If you haven't looked at Home Assistant yet, you can check it out here.

I’ve mainly been using a Raspberry Pi 3, but I have also tested Home Assistant on a Raspberry Pi 2 and it ran very well with no issues.

Below are some instructions for setting up Home Assistant.  These are my notes, but hopefully you might find them useful too.

Firstly, go to the Home Assistant site and download the image of Hassbian.  Grab the Etcher software too for writing the image to the SD card.  After you've written the image to the SD card, put it in the Raspberry Pi and start it up.

NOTE: The Hassbian instructions say to wait about 5 minutes; mine took between 5-10 minutes.  During this time, Home Assistant may detect devices/sensors on the network it is connected to.  It may automatically find some of your devices, e.g. it automatically found my Chromecast.

Setup the Raspberry Pi

Once Hassbian is up, SSH in using the pi user – remember the default password is raspberry.  Once in, the first thing I did was set up the Raspberry Pi.  This is pretty much the same as you would do if it was running Raspbian.

Change the password for the pi user.
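passwd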

Then configure the Raspberry Pi settings.
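sudo raspi-config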

I update the timezone, locale and Wi-Fi country, and expand the filesystem (so I can use the full SD card).  Then reboot – raspi-config usually prompts you to.

After the reboot, update and upgrade the packages installed on the Raspberry Pi.
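sudo apt-get update
sudo apt-get upgrade -y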

Note: This normally takes quite a while.

I usually do another reboot after the updates and upgrade just to make sure everything is running on the updated versions with no issues.

Configure Home Assistant

To configure Home Assistant, you'll need to edit the Home Assistant configuration file.  In Hassbian, the Home Assistant configuration files are located in /home/homeassistant/.homeassistant.

The Home Assistant site has resources on this.  You can check them out here.

Now let's update the Home Assistant configuration file.
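sudo nano /home/homeassistant/.homeassistant/configuration.yaml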

It will probably look something like this:
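The original snippet isn't preserved; the generated homeassistant: block is roughly like this (the values here are placeholders):

homeassistant:
  name: Home
  latitude: 0.0
  longitude: 0.0
  elevation: 0
  unit_system: metric
  time_zone: UTC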

I updated latitude, longitude, elevation, unit_system and time_zone.

If you are having trouble determining your latitude and longitude, Google Maps can help you find them.  If you click on your location on the map, the location details usually appear at the bottom of the browser window.  For more info, the Google help page on this is here.

This configuration file can be fussy, so it's a good idea to validate your changes before restarting Home Assistant to use them. You can do this very easily in the GUI, which you can reach via the hostname or IP address of your Raspberry Pi on port 8123, e.g. http://172.16.13.13:8123/.

HomeAssistantConfigurationWebPage1.jpg

Once you’ve made changes to the configuration file, you’ll need to restart Home Assistant.
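Assuming Hassbian's default service name:

sudo systemctl restart home-assistant@homeassistant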

Security and Certificates

I would also recommend that you password protect your Home Assistant.  It’s good practice even if you aren’t exposing it directly to the Internet.  To do this, update the http: section of the configuration.yaml file and add api_password: PASSWORD

It should look like this:
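http:
  api_password: YourSecretPassword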

You’ll need to restart Home Assistant for this to take effect.

Going to the Home Assistant web gui after this change will prompt for a password.

HomeAssistantPasswordPrompt

You should note, however, that this option still transmits the password insecurely over HTTP.  You'll need to add certificates if you want your traffic transferred securely.

IMPORTANT NOTE: If your Raspberry Pi has a private IP address (e.g. 192.168.0.5) and you get a certificate for a domain with a public IP address, your browser will warn that the Raspberry Pi is not trusted when you access it from your private network.  If you access it from the Internet, you will not get this message.

If you decide you want a certificate, Let's Encrypt provides a fantastic service where you can get free certificates.  You need a domain for this, as Let's Encrypt certificates require that you can prove ownership of your domain.  The easiest way to prove this is to temporarily port forward ports 80 & 443 to your Raspberry Pi while you run the script which sets up, verifies and obtains a certificate.

To get the certificates, after you've put the port forwards in place, run the following commands to obtain a certificate.  Make sure you update the email address and hostname to suit your Raspberry Pi.
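The original commands aren't preserved here; the usual certbot-auto approach from that era looked something like this (the email address and domain are placeholders):

git clone https://github.com/certbot/certbot
cd certbot
./certbot-auto certonly --standalone --email you@example.com -d home.example.com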

Note: These scripts will install python if it’s not already installed.

This will take a while.  Once it has completed, it will show details of the certificate you just obtained, including where the certificate files have been saved.

Now, to use the certificate you just got in Home Assistant, we’ll need to edit the configuration.yaml file again.

Make sure you update the path of the key and certificate to match your domain.
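For example (the domain in the paths is a placeholder):

http:
  api_password: YourSecretPassword
  ssl_certificate: /etc/letsencrypt/live/home.example.com/fullchain.pem
  ssl_key: /etc/letsencrypt/live/home.example.com/privkey.pem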

Once you’ve made changes to the configuration file, you’ll need to restart Home Assistant.

As the certificates from Let's Encrypt expire after 90 days, it's important to renew them.

The previous script secures the certificates for the root user only, so we'll first need to update the permissions.
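One way to do this (an assumption about the original approach; it simply makes the certificate directories readable):

sudo chmod -R 755 /etc/letsencrypt/live
sudo chmod -R 755 /etc/letsencrypt/archive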

Once this has been done, add the auto-renewal setup into cron.
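sudo crontab -e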

(select nano, it's easiest)
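An example entry (the schedule and the certbot path are assumptions – adjust them to where you cloned certbot):

30 3 * * * /home/pi/certbot/certbot-auto renew --quiet --no-self-upgrade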

This crontab will attempt to renew the certificate on a daily basis, but you could go weekly or monthly if you prefer.

Troubleshooting

If you have any issues with your Home Assistant (maybe a typo in the configuration file), checking the Home Assistant log file can give you information on what is wrong.
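On Hassbian the log lives alongside the configuration:

tail -f /home/homeassistant/.homeassistant/home-assistant.log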

That’s it.  Home Assistant is now configured.  The next step is to attach devices/sensors and add automation.

AD Connect Filtering

If you've installed Azure AD Connect to sync objects from your local Active Directory to Office 365, you may have seen that you can use filtering to stop objects being synced.  Yeah yeah, I hear you say, you can filter objects by the OU they're in.  Yes you can, but you can also filter by attributes on objects, which as you can imagine can be very handy.

Check out the below link for some Microsoft doco on how to do this.

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-configure-filtering#attribute-based-filtering

The filtering examples in the above link can be used to filter users in or out of being synced to Office 365 Azure AD from your local AD. It uses the Extension Attributes (on the user objects) to perform the filtering. Once your AD Connect has been set up to do this filtering, the PowerShell examples below can be used to populate the relevant ExtensionAttribute with values that will stop users being synced to Office 365.

The Microsoft example uses ExtensionAttribute15 set to 'NoSync'. In my examples, I've used ExtensionAttribute8 set to 'DoNotSync', as that's the ExtensionAttribute and value selected in our Azure AD Connect.

To find any users within your AD which have ExtensionAttribute8 set to ‘DoNotSync’
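The original command isn't preserved; with the ActiveDirectory module (RSAT) it would be along these lines:

Get-ADUser -Filter "extensionAttribute8 -eq 'DoNotSync'" -Properties extensionAttribute8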

 

To find any users within your AD which have ExtensionAttribute8 set to ‘DoNotSync’ within a specific OU
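For example (the OU in -SearchBase is a placeholder):

Get-ADUser -Filter "extensionAttribute8 -eq 'DoNotSync'" -SearchBase 'OU=Staff,DC=example,DC=com' -Properties extensionAttribute8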

 

Or a specific user
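For example (jsmith is a placeholder username):

Get-ADUser -Identity jsmith -Properties extensionAttribute8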

To Add ‘DoNotSync’ in ExtensionAttribute8 to a specific user
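A sketch using -Replace (which overwrites any existing value on the attribute):

Set-ADUser -Identity jsmith -Replace @{ extensionAttribute8 = 'DoNotSync' }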

 

Connecting to an Oracle Db from PowerShell

Sometimes you need to be able to connect to databases from PowerShell; in this case I needed to connect to an Oracle database. First, you'll need to download and install the Oracle client. In my case, I used the win64 11gR2 client. Make sure you get the appropriate version to suit the version of Oracle (e.g. Oracle 11g) and the Windows machine you are installing it on (win64). You can download clients from here.

After that, all you need is the PowerShell code and knowledge of the database you’re going to connect to. The PowerShell code below can be used to connect by either an Oracle SID or Service Name – choose which works best for you.
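The original code isn't preserved here; a sketch using the ODP.NET assembly from the Oracle client follows (the DLL path, host, SID and credentials are placeholders). To connect by Service Name instead of SID, swap SID=ORCL for SERVICE_NAME=your.service.name in the connect descriptor.

# The path to Oracle.DataAccess.dll depends on your client install - this one is a placeholder
Add-Type -Path 'C:\oracle\product\11.2.0\client_1\ODP.NET\bin\2.x\Oracle.DataAccess.dll'

# Connect descriptor using a SID; use SERVICE_NAME= instead of SID= to connect by service name
$connectionString = 'User Id=myuser;Password=mypassword;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))'

$connection = New-Object Oracle.DataAccess.Client.OracleConnection($connectionString)
$connection.Open()

$command = $connection.CreateCommand()
$command.CommandText = 'SELECT sysdate FROM dual'
$reader = $command.ExecuteReader()
while ($reader.Read()) { $reader.GetValue(0) }

$reader.Close()
$connection.Close()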

 

 

Office 365 License assignment using AD Groups

If you've searched for Office 365 licensing using AD groups, you usually find PowerShell scripts that use AD groups to provision Office 365 licenses, but that's not what this is.  It's also not using Azure Automation to run PowerShell scripts within Azure to perform licensing.  This is Azure AD functionality (in Preview) to perform Office 365 licensing based on AD group membership.

It's been announced only recently, and if you're currently performing licensing of users in Office 365 with PowerShell scripts, you should definitely have a look at this functionality.

Azure AD Group Licensing using E3

Of course, this functionality is inside Azure Active Directory (currently in preview in the new Azure Portal). In order to assign licences via groups, you need to have an Azure AD licence. The Microsoft documentation states only an Azure AD Basic licence is required.  You can check out that information here.

If you just want to have a look at the group licensing functionality and don’t want to buy a license, you can sign up for a trial for the Basic and the Premium licences inside your Office 365 tenancy for no cost.  You could use a trial for either of these to give this a go.

Azure AD Basic License Screenshot
Azure AD Premium P1 License Screenshot

If you don’t have Azure, you can sign up with a free trial. There is currently an offer to sign up and get $200 credit for 30 days.  Check it out here.

If you're signing up for Azure with a different account or have signed up to Office 365 and Azure with different credentials, check out here to find out how to connect them together. To assign the licences in Office 365 through Azure AD Preview, you need to be able to access the Azure AD that your Office 365 tenancy uses from within the Azure portal.

Once you’ve made it into Azure, it’s actually pretty straight forward. Microsoft have put a guide together on how to do this.  I didn’t think it was easy to find, so here is a link to it – link.

If you’re interested in the full process to apply the functionality to groups, I took a screenshot for the entire process in the Azure AD Preview GUI.

Azure License Application Screenshot

Of course, groups are used to assign the licences. It requires a security group and works with groups synced from an on-premises directory as well as Azure cloud-only groups, both direct and dynamic. However, you should be aware that nested groups are not currently supported. If you use them regardless, be aware that licenses will only be applied to users with direct membership of the group, not to those in nested groups.

I was pleasantly surprised as the licences re-evaluated quite quickly when the group was updated, both adding and removing licences according to the group membership.

 

One of the things you should be careful of with the groups is changes to the license configuration. Every time this is changed, it removes licenses from every user in the group and then reapplies the licenses per the new configuration.  If you've got a lot of users in the group, this could potentially cause an outage to users and risk data loss.

To work around this, I would lean towards two methods.

1. Create a new group with the wanted license configurations.  Add all the users from the old group into the new one.  Make sure the new licenses apply and remove the users from the old group.

2. Create multiple groups, with each group having a specific license applied to it. (E.g. a group which only applies the E3 Exchange Online part of the license.)  For each part of the license you want to assign, create a new group – although you'll have to watch out for license dependencies. (E.g. OneDrive requires SharePoint.)

Migration is quite interesting, as the same user can have the same license both assigned directly from PowerShell and inherited from groups. Both will exist, and you can get conflicts between them.  The good thing is that you can easily get the results of the license application in Azure to find any conflicts.

The recommended migration path is to leave your PowerShell script in place while group licensing is configured to provide exactly the same licenses that the PowerShell script applies.  Once the same licences have been applied by group, disable the licensing PowerShell script.  Then you can start removing the licenses assigned with PowerShell; of course, it's recommended to do this in batches.

If you're used to checking whether users are licensed, note that both the Office 365 admin portal and the existing PowerShell cmdlets cannot see that a license was applied by group – they only show that a license is applied.  There are some PowerShell scripts available to provide information on the number of licences assigned and of what type, but it's still limited.  You can check those out here.  Once Azure AD is out of preview, hopefully we'll get visibility in Office 365 and PowerShell.

 

Upgrading Nodejs on Raspberry Pi 2

I read about using the n helper for node to perform an upgrade, however this did not work for me.  Instead, I added another source and upgraded with apt-get.

If you know of another way which consistently works, please let me know.  This worked for me; I had version 0.10 and this upgraded me to 0.12.

NOTE: I ran this as root, as sudo did not work.

You may want to perform an upgrade of packages before you do this.

The process should look something like this.
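The original commands aren't preserved; adding the NodeSource repository for the 0.12 branch looked like this at the time (run as root):

curl -sL https://deb.nodesource.com/setup_0.12 | bash -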

Then use apt-get to perform the upgrade.
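apt-get install -y nodejs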

The apt-get output should show nodejs being upgraded to the new version from the added source.

 

As I use node for node_red, I restarted node_red after the upgrade.  I experienced no issues with node_red after the upgrade to 0.12.

 

UPDATE 29 April 2016: Upgrading to other newer versions of node.js works this way too.  I recently upgraded to v6 the same way – just use a different URL.  You can find the URLs for different versions here.  Make sure you check dependencies for other programs which depend on it, like node_red.

 

 

Static IP Address on Raspberry Pi 2

Using Raspbian, how to set a static IP address on the Raspberry Pi 2.

To set the Pi to a static IP address, you need to know the following (the commands after this list can help you find them):
address
netmask
network
broadcast
gateway
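
A couple of commands that can help you find your current values (a suggestion, not from the original post):

ifconfig eth0     # current address, netmask and broadcast
netstat -nr       # routing table - gateway and network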

Note: You should verify with your router configuration that the above details match your network settings.  If they don't, you may not be able to connect to your Raspberry Pi on the network, or the Pi may lose access to everything outside your network.

Now edit /etc/network/interfaces, replace iface eth0 inet dhcp with iface eth0 inet static, and add the information above.  Example below; replace the values with the network settings for your Pi.
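
auto eth0
iface eth0 inet static
    address 192.168.0.50
    netmask 255.255.255.0
    network 192.168.0.0
    broadcast 192.168.0.255
    gateway 192.168.0.1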

Now reboot
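sudo reboot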