
Is Your SenderID/SPF or DKIM Record Correctly Configured?


With Microsoft having just announced that DKIM is coming to Office 365 soon (release notes here) and SenderID already available, I thought this was a good time to write a blog post on using DMARC to show whether your records are correct.

DMARC is a protocol that allows you to see the effect of your SenderID/SPF records from the viewpoint of the recipient – as long as the recipient is one of the larger email receivers in the world, but that should be enough to help you validate your SenderID/SPF records. DMARC is currently in use at Outlook.com/Hotmail, Gmail, Facebook, Yahoo and Twitter to name some of the larger users of it.

To receive the results of a DMARC report you need to add your DMARC policy as a new TXT record to DNS. It looks something like this:

_dmarc IN TXT "v=DMARC1;p=reject;pct=100;rua=mailto:postmaster@dmarcdomain.com"

This TXT record for "_dmarc" at the root of your domain tells the receiver to report back to you all (pct=100) SenderID/SPF failures; DKIM failures are reported too. If you send lots of email you might want to report on a smaller sample of the overall failures, say pct=20. The record also says who to send the report to (rua=email address).
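
To confirm the record has published correctly, you can query it from any Windows machine using PowerShell's DnsClient module. A minimal sketch, using the example domain above:

Resolve-DnsName -Name "_dmarc.dmarcdomain.com" -Type TXT | Select-Object -ExpandProperty Strings

This should echo back the v=DMARC1 string exactly as you entered it in DNS.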

Finally, one of the most useful parts of DMARC is that you can tell the receiver what to do with failures against your SenderID/SPF or DKIM record. For example, in SenderID/SPF you can append the "-all" modifier to the end of the record. This says that anything not covered by the rest of the record is to fail SenderID. The recipient then decides what to do with your email (as it might really be your email – you might just have got your SPF or DKIM record wrong). In the example above you tell the recipient email system you would like them to reject (p=) the email. The policy could also be quarantine or none. The policy p=none means accept the message, but report it back to the rua email address in the DMARC record.
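
For illustration, an SPF record ending in the hard-fail modifier might look like the following. This is a sketch that assumes Office 365 is the only system sending mail for the domain (the include value is Exchange Online's published SPF record):

example.com. IN TXT "v=spf1 include:spf.protection.outlook.com -all"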

p=none allows you to introduce SPF/DKIM and ensure no rejection of your messages at the recipient end, while still getting reports back on the IP addresses from which your emails (or spoofed emails) are being sent.

Once you are sure your SPF/DKIM records are correct, change your policy to p=quarantine and then, when you are finally sure, change it to p=reject.
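
Using the same record format as above, that staged rollout might look like this (one record at a time, moving to the next only when the reports look clean):

_dmarc IN TXT "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@dmarcdomain.com"
_dmarc IN TXT "v=DMARC1;p=quarantine;pct=100;rua=mailto:postmaster@dmarcdomain.com"
_dmarc IN TXT "v=DMARC1;p=reject;pct=100;rua=mailto:postmaster@dmarcdomain.com"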

An example of creating a DMARC policy on your domain is shown below. The example uses AWS Route 53 DNS, but any DNS management console that supports TXT records should work:

image

Finally, as this is just a quick intro to DMARC: you can also use the ruf= qualifier to set the email address for per-message failure reports.


Configuring Exchange On-Premises to Use Azure Rights Management


This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at enabling on-premises Exchange Servers to use this cloud-based RMS server. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone even if they are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

Exchange Server integrates very nicely with on-premises RMS servers. To integrate Exchange on-premises with Windows Azure Rights Management you need to install a small connector service on-premises that connects Exchange to the cloud RMS service. On-premises file servers (classification) and SharePoint can also use this service to integrate themselves with cloud RMS.

You install this small service on-premises on servers that run Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. After you install and configure the connector, it acts as a communications interface between the on-premises IRM-enabled servers and the cloud service. The service can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40839

From this download link there are three files to get onto the server you are going to use for the connector.

  • RMSConnectorSetup.exe (the connector server software)
  • GenConnectorConfig.ps1 (this automates the configuration of registry settings on your Exchange and SharePoint servers)
  • RMSConnectorAdminToolSetup_x86.exe (needed if you want to configure the connector from a 32bit client)

Once you have the software you need installed, IT and users can easily protect documents and pictures both inside and outside your organization, without having to install additional infrastructure or establish trust relationships with other organizations.

The overview of the structure of the link between on-premises and Windows Azure Rights Management is as follows:

IC721938

Note that there are some prerequisites. You need to have an Office 365 tenant and to have turned on Windows Azure Rights Management. Once you have done this you need the following:

  • Get your Office 365 tenant up and running
  • Configure Directory Synchronization between on-premises Active Directory and Windows Azure Active Directory (the Office 365 DirSync tool)
  • Optionally (recommended but not required), enable AD FS for Office 365 to avoid having to log in to Windows Azure Rights Management when creating or opening protected content
  • Install the connector
  • Prepare credentials for configuring the software
  • Authorise the servers for connecting to the service
  • Configure load balancing to make this a highly available service
  • Configure Exchange Server on-premises to use the connector

Installing the Connector Service

  1. You need to set up an RMS administrator. This administrator is either a specific user object in Office 365 or all the members of a security group in Office 365.
    1. To do this start PowerShell and connect to the cloud RMS service by typing Import-Module aadrm and then Connect-AadrmService.
    2. Enter your Office 365 global administrator username and password
    3. Run Add-AadrmRoleBasedAdministrator -EmailAddress <email address> -Role "GlobalAdministrator" or Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName <group Name> -Role "ConnectorAdministrator" (these commands are pulled together in the sketch after this list). If the administrator object does not have an email address then you can look up the ObjectID in Get-MsolUser and use that instead of the email address.
  2. Create a name for the connector in any DNS namespace that you own. This name needs to be resolvable from your on-premises servers, so it could be in your internal (.local etc.) AD domain namespace – for example rmsconnector.contoso.local – pointing at the IP address of the connector server or the load balancer VIP that you will use for the connector.
  3. Run RMSConnectorSetup.exe on the server you wish to have as the service endpoint on-premises. If you are going to build a highly available solution, then this software needs installing on multiple machines and can be installed in parallel. Install a single RMS connector (potentially consisting of multiple servers for high availability) per Windows Azure RMS tenant. Unlike Active Directory RMS, you do not have to install an RMS connector in each forest. Select to install the software on this computer:
    IC001
  4. Read and accept the licence agreement!
  5. Enter your RMS administrator credentials as configured in the first step.
  6. Click Next to prepare the cloud for the installation of the connector.
  7. Once the cloud is ready, click Install. During the RMS installation process, all prerequisite software is validated and installed, Internet Information Services (IIS) is installed if not already present, and the connector software is installed and configured
    IC002
  8. If this is the last server that you are installing the connector service on (or the first if you are not building a highly available solution) then select Launch connector administrator console to authorize servers. If you are planning on installing more servers, install them now rather than authorising servers:
    IC003
  9. To validate the connector quickly, connect to http://<connectoraddress>/_wmcs/certification/servercertification.asmx, replacing <connectoraddress> with the server address or name that has the RMS connector installed. A successful connection displays a ServerCertificationWebService page.
  10. For an Exchange Server organization or SharePoint farm it is recommended to create a security group (one for each) that contains the security objects that Exchange or SharePoint runs as. This way the servers all get the rights needed for RMS with minimal administrative interaction. Adding servers individually rather than to the group results in the same outcome, it just requires more work. It is important that you authorize the correct object. For a server to use the connector, the account that runs the on-premises service (for example, Exchange or SharePoint) must be selected for authorization. For example, if the service is running as a configured service account, add the name of that service account to the list. If the service is running as Local System, add the name of the computer object (for example, SERVERNAME$).
    1. For servers that run Exchange: You must specify a security group and you can use the default group (DOMAIN\Exchange Servers) that Exchange automatically creates and maintains of all Exchange servers in the forest.
    2. For SharePoint you can use the SERVERNAME$ object, but the recommended configuration is to run SharePoint using a manually configured service account. For the steps for this see http://technet.microsoft.com/en-us/library/dn375964.aspx.
    3. For file servers that use File Classification Infrastructure, the associated services run as the Local System account, so you must authorize the computer account for the file servers (for example, SERVERNAME$) or a group that contains those computer accounts.
  11. Add all the required groups (or servers) to the authorization dialog and then click close. For Exchange Servers, they will get SuperUser rights to RMS (to decrypt content):
    image
    image
  12. If you are using a load balancer, then add all the IP addresses of the connector servers to the load balancer under a new virtual IP and publish it for TCP port 80 (and 443 if you want to configure it to use certificates), distributing traffic equally across all the servers. No affinity is required. Add a health check for the success of an HTTP or HTTPS connection to http://<connectoraddress>/_wmcs/certification/servercertification.asmx so that the load balancer fails over correctly in the event of a connector server failure.
  13. To use SSL (HTTPS) to connect to the connector server, on each server that runs the RMS connector, install a server authentication certificate that contains the name that you will use for the connector. For example, if your RMS connector name that you defined in DNS is rmsconnector.contoso.com, deploy a server authentication certificate that contains rmsconnector.contoso.com in the certificate subject as the common name. Or, specify rmsconnector.contoso.com in the certificate alternative name as the DNS value. The certificate does not have to include the name of the server. Then in IIS, bind this certificate to the Default Web Site.
  14. Note that any certificate chains or CRLs for the certificates in use must be reachable.
  15. If you use proxy servers to reach the internet then see http://technet.microsoft.com/en-us/library/dn375964.aspx for steps on configuring the connector servers to reach the Windows Azure Rights Management cloud via a proxy server.
  16. Finally you need to configure the Exchange or SharePoint servers on-premises to use Windows Azure Rights Management via the newly installed connector.
    • To do this you can either download and run GenConnectorConfig.ps1 on the server you want to configure or use the same tool to generate Group Policy script or a registry key script that can be used to deploy across multiple servers.
    • Just run the tool and at the prompt enter the URL that you have configured in DNS for the connector, followed by the parameter to make the local registry settings, the registry files or the GPO import file. Enter either http:// or https:// in front of the URL depending upon whether or not SSL is in use on the connector's IIS website.
    • For example .\GenConnectorConfig.ps1 -ConnectorUri http://rmsconnector.contoso.com -SetExchange2013 will configure a local Exchange 2013 server
  17. If you have lots of servers to configure then run the script with -CreateRegEditFiles or -CreateGPOScript along with -ConnectorUri. This will make five .reg files (for Exchange 2010 or 2013, SharePoint 2010 or 2013 and the File Classification service). For the GPO option it will make one GPO import script.
  18. Note that the connector can only be used by Exchange Server 2010 SP3 RU2 or later or Exchange 2013 CU3 or later. The OS on the server also needs to include a version of the RMS client that supports RMS Cryptographic Mode 2. This is Windows Server 2008 + KB2627272, Windows Server 2008 R2 + KB2627273, Windows Server 2012, or Windows Server 2012 R2.
  19. For Exchange Server you need to manually enable IRM as you would do if you had an on-premises RMS server. This is covered in http://technet.microsoft.com/en-us/library/dd351212.aspx but in brief you run Set-IRMConfiguration -InternalLicensingEnabled $true. The rest, such as transport rules, OWA and search configuration, is covered in the mentioned TechNet article.
  20. Finally you can test if RMS is working with Test-IRMConfiguration -Sender billy@contoso.com. You should get a message at the end of the test saying Pass.
  21. If you downloaded GenConnectorConfig.ps1 before May 1st 2014 then download it again, as the version before this date writes the registry keys incorrectly and you get errors such as "FAIL: Failed to verify RMS version. IRM features require AD RMS on Windows Server 2008 SP2 with the hotfixes specified in Knowledge Base article 973247" and "Microsoft.Exchange.Security.RightsManagement.RightsManagementException: Failed to get Server Info from http://rmsconnector.contoso.com/_wmcs/certification/server.asmx. —> System.Net.WebException: The request failed with HTTP status 401: Unauthorized.". If you get these then turn off IRM, delete the "C:\ProgramData\Microsoft\DRM\Server" folder to remove old licences, delete the registry keys, run the latest version of GenConnectorConfig.ps1, refresh the RMS keys with Set-IRMConfiguration -RefreshServerCertificates and reset IIS with IISRESET.
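
Pulling the PowerShell from steps 1, 16, 19 and 20 together, here is a minimal end-to-end sketch. The server name and email addresses are placeholder examples, not values from a real deployment:

# Step 1 - authorize an RMS administrator (run where the AADRM module is installed)
Import-Module aadrm
Connect-AadrmService   # prompts for Office 365 global administrator credentials
Add-AadrmRoleBasedAdministrator -EmailAddress admin@contoso.com -Role "GlobalAdministrator"

# Step 16 - point a local Exchange 2013 server at the connector
.\GenConnectorConfig.ps1 -ConnectorUri https://rmsconnector.contoso.com -SetExchange2013

# Steps 19 and 20 - enable and test IRM in the Exchange Management Shell
Set-IRMConfiguration -InternalLicensingEnabled $true
Test-IRMConfiguration -Sender billy@contoso.com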

Now you can encrypt messages on-premises using your AADRM licence without needing an RMS server deployed locally.

Creating Microsoft Rights Management Templates and Policies


This article is the sixth in a series of posts looking at Microsoft’s new Rights Management product set. In the previous post we looked at turning on the feature in Office 365 and in a later post we will see how to integrate this into your on-premises servers. In this post we will look at how to add templates to Microsoft Rights Management so that you can protect content with options other than the two default templates of “Company – Confidential” and “Company – Confidential View Only”. In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

One of the most requested features of Microsoft Rights Management (which used to be called Azure RMS) is custom templates. Custom templates let you define the protection policies you would like to roll out within your organization. Whether your organization is using Azure RMS as part of your on-premises deployment (via the RMS connector) or as part of Office 365, you can now do this via the Azure Management Portal.

First though, you (currently) need an Azure subscription. To get an Azure subscription you need a credit card, even though what you are doing here in Azure is not charged for. If all you do in Azure is what is in this blog then the card will never be charged – but as you can do so much, you will probably not stop at just what is here in the blog. Note that if you are doing this for lab and testing purposes you will still need a credit card, and you cannot use the same card for more than one subscription. This is awkward, as it means each client I set this up for needs to provision their own lab tenant for me and then grant me permissions. A free Azure option showing just the free features and requiring no credit card would be very useful. So, to create your Microsoft Rights Management custom templates, log in to Azure at https://manage.windowsazure.com and sign up if necessary. Make sure you log in with the same Office 365 global administrator account that you used when you were enabling RMS in Office 365.

  1. Do not sign in with your Microsoft ID (formerly Live ID); always sign in here with your Organizational ID.
  2. Scroll down the left of the management screen and click Active Directory.
  3. To use Microsoft Rights Management you must have synced your on-premises Active Directory to Azure Active Directory; you can also manage AD from here now, as another free feature of the subscription you have created.
  4. Select Rights Management on the main screen:
    image
  5. Click your company name and then in the getting started screen select Manage your rights policy templates. You will see the two default templates of “Company – Confidential” and “Company – Confidential View Only”.
    image

These two templates protect content so that it can only be seen by members of your company (that is, users whose accounts are synced to your Azure Active Directory tenant from your on-premises Active Directory). Content protected with "Company – Confidential" is editable by anyone in the company but cannot be opened by anyone outside the company. The "Company – Confidential View Only" template is used to protect content that you want people in your company to view, but not edit; again, it is not viewable by users who do not have a login at your company. With the custom templates that you can now add, you can designate different groups of users that will have access to documents protected with these templates, and you can define an access level or a list of rights for each of these groups. You can also control how long content protected with these templates will be accessible. Finally, you can define whether you want to require users to be online to access the content (thus getting maximum control over their ability to access the document if your policies change over time, and ensuring all accesses to the documents get logged), or whether you want to allow them to cache document licenses so they can access the content from disconnected locations for up to a defined period of time.

To begin the process click Add on the bottom of the management web page and fill in the details required:
 image

Though you can use any name and description, the recommendation is to have a list short enough that your users can scan it easily, with names that contain the company name and a brief outline of the rights granted. The following might be good example names:

  • Contoso – Board Level Confidential (Grants access to company board level staff only)
  • Contoso – All Full Time Employees Only (Visible to all Full Time Employees only)
  • Contoso – Legal Time Sensitive View Only (Grants the legal team online access to document for 7 days)

The name of the template appears in the yellow banner at the top of the application, so it should clearly identify who can view the document and what they can do with it. You can also set different languages. Applications will pick the most suitable language version, with US English at the top of the list as downloaded from Microsoft, and so that is the version that appears if your specific language is not present. Tick the circle and the template is added. The management console returns with "Successfully added the template. Clients won't see the changes until they refresh their templates". Note that templates start their life as "archived" templates that cannot be used, but this is fine, as we have not finished configuring the template anyway. Click the name of the template to get to its properties pages. You will be presented with the "Quick Start" page:
image

Click "Get started" under "Configure rights for users and groups" to say which groups or users from your Active Directory can access content protected with this template. This info is synced from your AD every three hours – so any new users or groups will appear after a short while: image

Keep in mind that your groups must have an email address for you to be able to use them in a custom template. If a group cannot be selected, that will be the reason – but also remember the previous comment about the three-hour sync from AD.

So if you do mail-enable a group on-premises, it is a good idea to continue creating this template now and then return in a few hours to add the additional groups and rights. Select the rights you wish to apply to content protected with this template:
image

Once the rights are added for the groups/users selected you can add additional groups and users with different rights. To add additional names and descriptions (in different languages) click Configure on the top menu bar:
image

And if your template has content expiration settings, these too can be set under Configure:
image

And finally, before you publish your template, you need to decide whether your content can only be viewed online (that is, with a connection to Microsoft RMS) or whether users are allowed to cache the rights to open the content and, if so, for how long those rights can be cached:
image

At the top of the Configure screen select Publish to make a template accessible to users (or Archive to restrict access to it again, and of course all the content protected by the template). Save the template. Clients will be able to use the template to protect their content once they have updated their templates.

For example, Exchange Server will refresh its templates every 30 minutes from the RMS connector and OWA will show them at next login, but if you are running Exchange Online you need to refresh the templates manually. See http://technet.microsoft.com/library/dn642472.aspx#BKMK_RefreshingTemplates for the steps to refresh different clients and servers.
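
For the Exchange Online case, the refresh is run from a remote PowerShell session connected to your tenant. A hedged sketch based on the TechNet steps linked above – "RMS Online - 1" is the typical default trusted publishing domain name rather than a guaranteed value, so check it first:

# In an Exchange Online remote PowerShell session
Get-RMSTrustedPublishingDomain | Format-List Name   # confirm the TPD name in your tenant
Import-RMSTrustedPublishingDomain -Name "RMS Online - 1" -RefreshTemplates -RMSOnline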

Enabling Microsoft Rights Management in SharePoint Online


This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365 and in this post we will look at protecting documents in SharePoint. This means your cloud users will have their data protected just by saving it to a document library.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

To enable SharePoint Online to integrate with Microsoft Rights Management you need to turn on RMS in SharePoint. You do this with the following steps:

  1. Go to service settings, click sites, and then click View site collections and manage additional settings in the SharePoint admin center:
    image
  2. Click settings and find Information Rights Management (IRM) in the list:
    image
  3. Select Use the IRM service specified in your configuration and click Refresh IRM Settings:
    image
  4. Click OK

Once this is done, you can now enable selected document libraries for RMS protection.

  1. Find the document library that you want to enforce RMS protection upon and click the PAGE tab at the top left of the SharePoint site (under the Office 365 logo).
    image
  2. Then click Library Settings:
    image
  3. If the site is not a document library – for example, the picture below shows a "document center" site – you will not see the Library Settings option. For these sites, navigate to the document library specifically, click the LIBRARY tab and then choose Library Settings:
    image
    image
  4. Click Information Rights Management
    image
  5. Select Restrict permissions on this library on download and add your policy title and policy description. Click SHOW OPTIONS to configure additional RMS settings on the library, and then click OK.
    image
  6. The additional options allow you to enforce restrictions on the document library such as RMS key caching (for offline use) and allowing the document to be shared with a group of users. This group must be mail-enabled (or at least have an email address in its email address attribute) and be synced to the cloud.

To start using the RMS functionality in SharePoint, upload a document to this library or create a new document in the library. Then download the document again – it will now be RMS protected.

Using Microsoft Rights Management from Microsoft Office


This article is the second to last in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365 and in this post we will look at protecting documents and emails in Microsoft Office 2010 or later. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone even if they are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

Once you have enabled Microsoft Rights Management, make sure your Microsoft Office user is connected to Office 365 via the accounts feature in Office:

image

In the above picture I have Microsoft Word showing that I am connected to my organizational account in Office 365.

Now that I am logged in to Office 365 I can access the RMS templates in Office 365 and protect documents with the templates I have created. To go through this process you need to follow these steps:

  1. Select the File tab in the Office application that your document has been created in.
  2. You will find yourself in the Info area:
    image
  3. Click Protect Document > Restrict Access > Select your Office 365 tenant name (if you have more than one that you have used, you will see each that is enabled for RMS) > then select the protection template that you want to use:
    image
  4. The Protect Document button is highlighted and the RMS protection template is displayed, along with the description. If you are running a non-English version of Microsoft Office and your template has a matching language name, then you will see that instead. It will default to US-English for the languages that do not have a country option in the RMS template creation process:
    image
  5. Note though, that if you do not have any company employees working in the USA then you could use US-English for your own default language if your actual language is not available – for example, in the above I am on a UK-English version of Office and I could have used UK-English spellings and phrases for my name and description – who is to know that the US-English default is being shown!
  6. Back in the document the yellow banner, which can be closed, shows the protection level and the description:
    image

Though I have shown Word in the above examples, this is true for other Office documents as well, as long as you are using the Office ProPlus edition, as you need that version to create RMS-protected content. For example, an Outlook message is shown below:

image

Getting Exchange Message Sizing Raw Data


On the internet there are a number of resources for collecting the raw data needed to size Exchange Server deployments. These include:

This blog post outlines my process for collecting the data needed for the average message size. What is missing from the above two posts is the ability to collect this data in one go for the last seven (or so) days and then average the message tracking logs over the busiest five or so days. Different countries have different working patterns and public holidays, and the scripts above only do the previous day (though Neil's post says the script uses averages across many days – it does not).

The Script

Download the script from the source and save to Notepad. Edit the top of the script by deleting the first five lines of the code (not the comments or the blank lines) and then replace them with the alternative lines shown below:

Remove These Lines

$today = get-date
$rundate = $($today.adddays(-1)).toshortdatestring()

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Replace With These Lines

$today = get-date
$whichDay = $args[0]
if (! $whichDay) { $whichDay = 1 }
$rundate = $($today.adddays(-$whichDay))

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Before running this script you need to do two things. First, if you have Exchange Server locations in different places around the world, you need to alter the script to only include the servers in the geographies that you want to search. Do this by changing the two lines that read as follows, adding a server name or a unique differentiator that will pick up just the mailbox and hub transport servers in those regions. Edge servers do not need to be considered:

  • $mbx_servers = Get-ExchangeServer |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver |? {$_.serverrole -match "hubtransport"} |% {$_.name}

These two lines are near the top and need to be changed to something like the following:

  • $mbx_servers = Get-ExchangeServer UK* |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver UK* |? {$_.serverrole -match "hubtransport"} |% {$_.name}

In the above I altered the script to find just the Exchange Servers starting with the letters UK. Adjust as needed. If you have one location/timezone worldwide then these changes are not needed.

If you are collecting data from an Exchange 2013 server(s) then change "hubtransport" in the $hts line to read "mailbox".

Save the file as a PowerShell script (Get-MessageTrackingLogStats.ps1) and copy it to your Exchange Server.

Second, before you can run the script you need to check that you have enough tracking logs to process, or you will get invalid and skewed data. Run Get-TransportServer | FL name,*trackinglog* and make sure that you have a large enough quota for each day of logs, then check each of these folders and make sure they do not exceed this value. If they do, then you need to run the below script frequently, before the log files are removed from the server, rather than at the end of 7 or 14 days. A sketch of that check is shown below.
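
Expanding the wildcard into the underlying property names, the quota check looks something like this:

Get-TransportServer | Format-List Name,MessageTrackingLogEnabled,MessageTrackingLogPath,MessageTrackingLogMaxAge,MessageTrackingLogMaxDirectorySize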

Now you can run the script with a number after it for how many days back you want to look at the logs. This will process the message tracking logs for that single day, that number of days back. For example, if you run Get-MessageTrackingLogStats.ps1 5 then it will look at one day of data from five days ago. Repeat the running of the script until you have run it seven times. You will have one CSV file of tracking log reports for each of the last seven days:

  • Get-MessageTrackingLogStats.ps1 1
  • Get-MessageTrackingLogStats.ps1 2
  • Get-MessageTrackingLogStats.ps1 3
  • Get-MessageTrackingLogStats.ps1 4
  • Get-MessageTrackingLogStats.ps1 5
  • Get-MessageTrackingLogStats.ps1 6
  • Get-MessageTrackingLogStats.ps1 7

Zip up these files and take them to a computer running Microsoft Excel.

Open the oldest file and process each of them as follows:
image

Hide columns C through E, H through O and R through U:
image

image

You now have a spreadsheet with Date/User/Received Total/Received MB Total/Sent Unique Total/Sent Unique MB Total:
image

Format as a table by selecting a cell in the table and clicking the Table button in the Insert tab:
image

The Table Tools / Design tab appears. Select the Total Row check box from here. This will scroll you to the bottom of the table.
image

Select the third cell in the total row, drop down the options and choose Average:
image

Drag this Average cell formula across all four columns of numbers:
image

Modify the fourth column (Received MB Total) to read as follows: =SUBTOTAL(101,[Received MB Total])/Table1[[#Totals],[Received Total]]*1024. This is the fourth column divided by the third column (the count of received messages) and multiplied by 1024 to convert it from MB to KB. The Exchange Bandwidth Calculator and Storage Calculator work on values in KB and not MB.
image

Repeat for the last column of figures (Sent Unique MB Total). This time the formula is =SUBTOTAL(101,[Sent Unique MB Total])/Table1[[#Totals],[Sent Unique Total]]*1024, which is the last column divided by the previous one and converted to KB. The total row now shows you the average messages sent and received and the average size of these messages in KB.

At the bottom of the spreadsheet add the following:

  • Sent/Mailbox/Day
  • Received/Mailbox/Day
  • Average/MessageSize/Day (KB)

Then copy the relevant data into the cells as shown. The Average is the sum of the two averages divided by 2, i.e. =(Table1[[#Totals],[Received MB Total]]+Table1[[#Totals],[Sent Unique MB Total]])/2:
image

Then take all your numbers and reduce the number of decimal points shown:
image

Now that you have the raw data calculated, save this file as an Excel Workbook (and not a CSV).

Repeat for each of the CSV files you have available.

Process The Daily Data

Create a new spreadsheet that contains the following three tables:

  • Messages Sent Per Day, with a row for each day and a column for each unique geographical region
  • Messages Received Per Day, with a row for each day and a column for each unique geographical region
  • Average Message Size (KB), with a row for each day and a column for each unique geographical region

This will look like the following, once all the data is copied from the source spreadsheets:

image

Now you can create a chart for each region for each table. The following shows Sent / Received and Average (KB). Each region is overlapping as a different line colour:

image
Note it is possible here that HK (blue) is missing some data, as it is unexpectedly low

image

image

Note the HK (blue) Sunday average message size. This is probably because one or a few users sent a disproportionate number of larger emails on the quiet day of the week. For my analysis I am going to ignore it.

Now I have my peak Messages Sent Per Day for each region – and I take the highest value for the week, not the value for yesterday, which is what I would get if I just ran the above script once.

This data can now go into the Bandwidth Calculator and generate accurate figures for the business in question.

Changing AD FS 3.0 Certificates


I am quite adept at configuring certificates and changing them around, but this one took me completely by surprise as it has a bunch of oddities to consider.

First the errors: Web Application Proxy (WAP) reported 0x80075213. In the event log the following:

The federation server proxy could not establish a trust with the Federation Service.

Additional Data
Exception details:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

User Action
Ensure that the credentials being used to establish a trust between the federation server proxy and the Federation Service are valid and that the Federation Service can be reached.

And

Unable to retrieve proxy configuration data from the Federation Service.

Additional Data

Trust Certificate Thumbprint:
431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB

Status Code:
 

Exception details:
System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
   at System.Net.HttpWebRequest.GetResponse()
   at Microsoft.IdentityServer.Management.Proxy.StsConfigurationProvider.GetStsProxyConfiguration()

This was ultimately caused by the certificate on the AD FS server having been replaced in the user interface, but this did not replace the certificate that HTTP was using, or the certificates used by the published web applications.

So here are the steps to fix this issue:

AD FS Server

  1. Ensure the certificate is installed in the computer store of all the AD FS servers in the farm
  2. Grant the ADFS service account permissions on the digital certificate. Do this by right-clicking the new digital certificate in the MMC snap-in for certificates and choosing All Tasks > Manage Private Keys. Grant read permission to the service account that ADFS is using (you need to click Object Types and select Service Accounts to be able to select this user). Repeat for each server in the farm.
  3. In the AD FS management console expand service > certificates and ensure that the service communications certificate is correct and that the date is valid.
  4. Open this certificate by double-clicking it and on the Details tab, check the value for the Thumbprint.
  5. Start PowerShell on the AD FS Server and run Get-AdfsSslCertificate (not Get-AdfsCertificate). You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate.
  6. If they do not match, then run Set-AdfsSslCertificate -Thumbprint XXXXXX (where XXXXXX is the thumbprint value without spaces) – see the sketch after this list.
  7. Then you need to restart the ADFS Server.
  8. Repeat for each member of the farm, taking each out of and back into any load balancer configuration you have. Ensure any SSL certs on the load balancer are updated as well.
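
Steps 5 to 7 pulled together as a quick sketch (the thumbprint is a placeholder; adfssrv is the AD FS service name on Windows Server 2012 R2):

# Compare the bound certificate hash with the new certificate's thumbprint
Get-AdfsSslCertificate

# Re-bind if they differ (thumbprint without spaces)
Set-AdfsSslCertificate -Thumbprint "1234567890ABCDEF1234567890ABCDEF12345678"

# Restart AD FS for the change to take effect
Restart-Service adfssrv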

Web Application Proxy

  1. Ensure the certificate is installed in the computer store of the web application proxy server as well. Permissions do not need to be set for this service.
  2. Run Get-WebApplicationProxySslCertificate. You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate.
  3. Restart the WAP server and it should now connect to the AD FS endpoint and everything should be working again.
  4. Check that each Web Proxy Application is using the new certificate, as sketched below. Do this with Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX (where XXXXXX is the thumbprint value without spaces). This sets all your applications to use the same certificate. If you have different certificates in use, then do them one by one with Get-WebApplicationProxyApplication | Format-Table Name,ExternalCert* to see what the existing thumbprints are and then Get-WebApplicationProxyApplication "name" | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX to do just the one called "name".
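
Steps 2 and 4 as a sketch (again the thumbprint is a placeholder):

# Check the certificate bound to the AD FS proxy endpoint
Get-WebApplicationProxySslCertificate

# See which certificate each published application is using
Get-WebApplicationProxyApplication | Format-Table Name,ExternalCert*

# Update every published application to the new certificate
Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint "1234567890ABCDEF1234567890ABCDEF12345678"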

Exchange Web Services (EWS) and 501 Error


As is common with a lot that I write in this blog, it is based on noting down the answers to stuff I could not find online. For this issue, I did find something online by Michael Van “Hybrid”, but finding it was the challenge.

So rather than detailing the issue and the reason (you can read that on Michael's blog), this post talks about the steps to troubleshoot the issue.

So first the issue. When starting a migration test from an Exchange 2010 mailbox via an Exchange 2013 hybrid server (running the mailbox and CAS roles) behind a Kemp load balancer (running 7.0-16 – the latest release at the time of writing, but recently upgraded from version 5 through 6 to 7) I got the following error:

image

The server name will be different (thanks Michael for the screenshot). In my case this was my client's UK datacentre. My client's Hong Kong datacentre was behind a Kemp load balancer as well, but is only running Exchange 2010, and the New York datacentre has an F5 load balancer. Moves from HK worked, but UK and NY failed for different reasons.

The issue shown above is not easy to solve, as the migration dialog tells you nothing. In my case it was also telling me the wrong server name. It should have been returning the external EWS URL from Autodiscover for the mailbox I was trying to move; instead it was returning the Outlook Anywhere external URL from the New York site (as the UK is proxied via NY for the OA connections). For moves to the cloud, we added the external URL for EWS to each site directly so we would move direct and not via the only site that offered internet-connected email.

So troubleshooting started with exrca.com – the Microsoft Connectivity Analyser. Autodiscover worked most of the time in the UK, but the Synchronization, Notification, Availability, and Automatic Replies tests (which exercise EWS) always failed after six and a half seconds.

Autodiscover returned the following error:

A Web exception occurred because an HTTP 400 – BadRequest response was received from Unknown.
HTTP Response Headers:
Connection: close
Content-Length: 87
Content-Type: text/html
Date: Tue, 24 Jun 2014 09:03:40 GMT

Elapsed Time: 108 ms.

And the EWS test, when AutoDiscover was returning data correctly, failed as follows:

Creating a temporary folder to perform synchronization tests.

Failed to create temporary folder for performing tests.

Additional Details

Exception details:
Message: The request failed. The remote server returned an error: (501) Not Implemented.
Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
Stack trace:
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.ValidateAndEmitRequest(IEwsHttpWebRequest& request)
at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.WebServices.Data.Folder.FindFolders(SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.Tools.ExRca.Tests.GetOrCreateSyncFolderTest.PerformTestReally()
Exception details:
Message: The remote server returned an error: (501) Not Implemented.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

Elapsed Time: 6249 ms.

What was interesting here was the 501, and that it always took approx. 6 seconds to fail.

Looking in the IIS logs from the 2010 servers that hold the UK mailboxes there were no 501 errors logged. The same was true for the EWS logs as well. So where was the 501 coming from? I decided to bypass Exchange 2013 for the exrca.com test (as my system is not yet live and that is easy to do) and so in Kemp pointed the EWS SubVDir directly to a specific Exchange 2010 server. Everything worked. So I decided it was an Exchange 2013 issue, apart from the fact I have lab environments the same as this (without Kemp) and it all works fine there. So I decided to search for "Kemp EWS 501" and that was the bingo keyword combination – "EWS and 501" or "Exchange EWS and 501" got nothing at all.

With my environment back to Kemp > 2013 > 2010 I looked at Michael's suggestions. The first was to run Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com. I changed this slightly, as I was not convinced that I was connecting to the correct endpoint. The migration reported the wrong server name and the exrca tests do not tell you what endpoint they are connecting to. So I tried Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress user-on-premises@domain.com

As AutoDiscover is reporting errors at times, the second of these cmdlets sometimes reported the following:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : AutoDiscover failed with a configuration error: The migration service failed to detect the
                     migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
                     or go back to the first step and retry using the Autodiscover service. Consider using the
                     Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
                     connectivity issues.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.AutoDiscoverFailedConfigurationErrorException:
                     AutoDiscover failed with a configuration error: The migration service failed to detect the
                     migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
                     or go back to the first step and retry using the Autodiscover service. Consider using the
                     Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
                     connectivity issues.
                        at Microsoft.Exchange.Migration.DataAccessLayer.MigrationEndpointBase.InitializeFromAutoDiscove
                     r(SmtpAddress emailAddress, PSCredential credentials)
                        at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessExcha
                     ngeRemoteMoveAutoDiscover()
IsValid            : True
Identity           :
ObjectState        : New

And when AutoDiscover was working (as it was random) I would get this:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : The ExchangeRemote endpoint settings could not be determined from the autodiscover response. No
                     MRSProxy was found running at ‘outlook.domain.com’.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.MigrationRemoteEndpointSettingsCouldNotBeAutodiscovere
                     dException: The ExchangeRemote endpoint settings could not be determined from the autodiscover
                     response. No MRSProxy was found running at ‘outlook.domain.com’. —>
                     Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
                     server ‘outlook.domain.com’ could not be completed. —>
                     Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
                     ‘
https://outlook.domain.com/EWS/mrsproxy.svc’ failed. Error details: The HTTP request is
                     unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
                     from the server was ‘Basic Realm=”outlook.domain.com”‘. –> The remote server returned an
                     error: (401) Unauthorized.. —>
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request is
                     unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
                     from the server was ‘Basic Realm=”outlook.domain.com”‘. —>
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
                     an error: (401) Unauthorized.
                        — End of inner exception stack trace —
                        — End of inner exception stack trace —
                        at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
                     s1.<ReconstructAndThrow>b__0()
                        at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
                        at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
                     row(String serverName, VersionInformation serverVersion)
                        at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
                     .<CallService>b__0()
                        at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
                        at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
                     serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
                        — End of inner exception stack trace —
                        at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
                        at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
                     int(Boolean fromAutoDiscover)
                        — End of inner exception stack trace —
IsValid            : True
Identity           :
ObjectState        : New

This was returning the URL https://outlook.domain.com/EWS/mrsproxy.svc, which is not correct for this mailbox (this was the OA endpoint in a different datacentre). External Outlook access is not allowed at this company, so the TMG server in front of the F5 load balancer in the NY datacentre was not configured for OA anyway, and browsing to the above URL returned the following picture – a well and truly broken scenario, but not the issue at hand here!

image

If OA (Outlook Anywhere) were available for this company, this is not what I would expect to see when browsing to the external EWS URL. To that end, our EWS URLs bypass TMG and go direct to the load balancer.

So now we have either no valid AutoDiscover response or EWS using the wrong URL. Back to the version of the cmdlet Michael was using, as that ignores AutoDiscover: Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com

RunspaceId         : 5874c796-54ce-420f-950b-1d300cf0a64a
Result             : Failed
Message            : The connection to the server ‘ewsinukdatacentre.domain.com’ could not be completed.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
                     server ‘ewsinukdatacentre.domain.com’ could not be completed. —>
                     Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
                     ‘
https://ewsinukdatacentre.domain.com/EWS/mrsproxy.svc’ failed. Error details: The remote server returned
                     an unexpected response: (501) Invalid Request. –> The remote server returned an error: (501) Not
                     Implemented.. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The
                     remote server returned an unexpected response: (501) Invalid Request. —>
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
                     an error: (501) Not Implemented.
                        — End of inner exception stack trace —
                        — End of inner exception stack trace —
                        at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
                     s1.<ReconstructAndThrow>b__0()
                        at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
                        at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
                     row(String serverName, VersionInformation serverVersion)
                        at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
                     .<CallService>b__0()
                        at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
                        at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
                     serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
                        — End of inner exception stack trace —
                        at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
                        at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
                     int(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

Now we can see the 501 error that exrca was returning. It would seem that the 501 was coming from the Kemp and not from the endpoint servers, which is why I could not locate it in the IIS or EWS logs. And indeed in the Kemp System Message File (logging options > system log files) I found the 501 error:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.1.2:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

Where the first IP address was a Microsoft datacentre IP and the second was the Kemp listener IP.

It turns out this is due to the Kemp load balancer not returning to Microsoft the Continue-100 status that it should. Microsoft waits 5 seconds and then sends the data it would have sent had it got the response back. At this point the Kemp blocks this data with a 501 error.

It is possible to turn off Kemp's processing of Continue-100 HTTP packets so that it lets them through; this is covered in http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html:

 

In the version at my client's, which was upgraded to 7.0-16 from v5 through v6 to v7, it was defaulting to RFC Conformant, but it needs to be Ignore Continue-100 to work with Office 365. Now EWS works, hybrid mailbox moves work, and AutoDiscover is always working on that server – a mix of issues all caused by the same setting in the Kemp.


Intermittent Error 8004789A with AD FS and WAP 3.0 (Windows Server 2012 R2)


This error appears when you attempt to authenticate with Office 365 using AD FS 3.0 – but only sometimes; often it will have been working fine and then the errors start!

I’ve found this error is due to two things, though there are other reasons. The full list of issues is at http://blogs.technet.com/b/applicationproxyblog/archive/2014/05/28/understanding-and-fixing-proxy-trust-ctl-issues-with-ad-fs-2012-r2-and-web-application-proxy.aspx.

I found that this occurred if the WAP servers and the AD FS servers were in different time zones (not just at different times).

And I found that if the domain schema level is not 2012 R2 then you need to run the script included there to copy settings between the AD FS servers. Certificates expire every 20 days, or when they are manually changed, so this script needs running by hand at or before these regular changes.

The second of these two issues though has been fixed in the June 2014 update for Windows Server 2012 R2. The fix is documented in http://support.microsoft.com/kb/2964735 and the update (the June 2014 update) is at http://support.microsoft.com/kb/2962409

Continuing Adventures in AD FS Claims Rules


There is an excellent article at http://blogs.technet.com/b/askds/archive/2012/06/26/an-adfs-claims-rules-adventure.aspx which discusses the use of Claims Rules in AD FS to limit some of the functionality of Office 365 to specific network locations, such as being only allowed to use Outlook when on the company LAN or VPN or to selected groups of users. That article’s four examples are excellent, but can be supplemented with the following two scenarios:

  • OWA for Devices (i.e. OWA for the iPhone and OWA for Android)
  • Using Outlook on a VPN, but needing to set up the profile when also on the VPN

OWA for Devices and AD FS Claims Rules

OWA for Devices is an app available from the Apple or Android store that provides mobile and offline access to your email, adding to the features available with ActiveSync. Though OWA for Devices is OWA, it also uses AutoDiscover to configure the app. Therefore if you have an AD FS claims rule in place that blocks AutoDiscover you will find that OWA for Devices just keeps prompting for authentication and never completes, though if you click Advanced and enter the server name by hand (outlook.office365.com) then it works.

Using OWA for iPhone/iPad diagnostics (described on Steve Goodman’s blog at http://www.stevieg.org/2013/08/troubleshooting-owa-for-iphone-and-ipad/) you might find the following entries in your mowa.log file:

[NetworkDispatcher exchangeURLConnectionFailed:withError:] , “Request failed”, “Error Domain=NSURLErrorDomain Code=-1012 “The operation couldn’t be completed. (NSURLErrorDomain error -1012.)” UserInfo=0x17dd2ca0 {NSErrorFailingURLKey=https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml, NSErrorFailingURLStringKey=https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml}”

And following the failure to reach https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml successfully you might see the following:

[NetworkDispatcher exchangeURLConnectionFailed:withError:] , “Request failed”, “Error Domain=ExchangeURLConnectionError Code=2 “Timeout timer expired” UserInfo=0x17dc6eb0 {NSLocalizedDescription=Timeout timer expired}”

and the following:

[PAL] PalRequestManager.OnError(): ExchangeURLConnectionError2: Timeout timer expired

[autodiscover] Autodiscover search failed. Error: ExchangeURLConnectionError2: Timeout timer expired

[actions] Action (_o.$Yd 27) encountered an error during its execution: Error: Timeout timer expired

[autodiscover] TimerCallback_PeriodicCallback_HandleFailedAutodiscoverSearch took too long (XXXms) to complete

The reason is that the first set of AutoDiscover lookups works, as they connect to the on-premises endpoint and authentication for this endpoint does not go through AD FS. When you reach the end of the AutoDiscover redirect process you need to authenticate to Office 365, and that calls AD FS – and that might be impacted by your AD FS claims rules.

The problem with OWA for Devices and claims rules can come about through the use of the AD FS Claims Rules Policy Builder wizard at http://gallery.technet.microsoft.com/office/Client-Access-Policy-30be8ae2. This tool can be adjusted quite easily for AD FS on Windows Server 2012 R2 – change line 218 to read “If (($OSVersion.Major –eq 6) –and ($OSVersion.Minor –ge 2))”, so that it looks for AD FS servers whose minor version is greater than or equal to 2 rather than exactly 2. The AD FS Claims Rules Policy Builder has a setting called “Block only external Outlook clients” – this blocks Outlook, Exchange Web Services, OAB and AutoDiscover from being used apart from on a network range that you provide. The problem is that you might want to block Outlook externally but still allow OWA for Devices to work.

To solve this problem make sure you do not block AutoDiscover in the claims rules. In the “Block only external Outlook clients” option, it’s the second claim rule that needs to be deleted, so you end up with just the following:

image

With AutoDiscover allowed from all locations, but EWS and RPC blocked in Outlook, you still block Outlook apart from on your LAN, but you do not block OWA for Devices. Use exrca.com to test Outlook AutoDiscover after making this change and you should find that the following error is no longer present when AutoDiscover gets to the autodiscover-s.outlook.com stage.

The Microsoft Connectivity Analyzer is attempting to retrieve an XML Autodiscover response from URL https://autodiscover-s.outlook.com/Autodiscover/Autodiscover.xml for user user@tenant.mail.onmicrosoft.com.

The Microsoft Connectivity Analyzer failed to obtain an Autodiscover XML response.

An HTTP 401 Unauthorized response was received from the remote Unknown server. This is usually the result of an incorrect username or password. If you are attempting to log onto an Office 365 service, ensure you are using your full User Principal Name (UPN).HTTP Response Headers:

If you change an existing claims rule you need to either restart the AD FS service or wait a few hours for it to take effect.
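If you would rather inspect or set the rules from PowerShell than re-run the wizard, a minimal sketch is below, run on the primary AD FS server. It assumes the Office 365 relying party trust has its default name, and it uses the documentation range 203.0.113.x where you would put your own public IP range; this illustrates the shape of the rules rather than being a drop-in replacement for the wizard output, so test it in a lab first.

# View the current issuance authorization rules for the Office 365 trust
Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform" |
    Select-Object -ExpandProperty IssuanceAuthorizationRules

# Permit everyone, then deny Outlook (RPC and EWS) from outside a given range.
# Note there is no rule denying AutoDiscover, so OWA for Devices keeps working.
$rules = @'
@RuleName = "Permit all users"
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");

@RuleName = "Block external Outlook (RPC and EWS) but not AutoDiscover"
exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value =~ "Microsoft.Exchange.(RPC|WebServices)"])
&& NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip", Value =~ "^203\.0\.113\."])
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");
'@
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -IssuanceAuthorizationRules $rules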

Outlook with AD FS Claims Rules and VPNs

As mentioned above, Outlook can be limited to working on a given set of IP ranges. If you block AutoDiscover from working apart from on these ranges (as was the issue causing OWA for Devices to fail above) then you can have issues with VPN connections.

Imagine the following scenario: Web traffic goes direct from the client, but Outlook traffic to Office 365 goes via the LAN. A split tunnel VPN scenario.

In this example, if AutoDiscover and initial profile configuration have never run, you have a claims rule that allows Outlook to work on the VPN (as the public IPs for your LAN/WAN are listed in the claims rules as valid source addresses), but AutoDiscover fails because of the same rules (as the web traffic comes from the client and not via the LAN). AutoDiscover therefore fails and Outlook is never provisioned, so it appears as if Outlook is being blocked by the claims rules even though you are on the correct network for those rules.

Creating Mailboxes in Office 365 When Using DirSync


This blog post describes the process to create a new user in Active Directory on-premises when email is held in Office 365 and DirSync is in use. With DirSync in use the editable copy of the user object is on-premises and most attributes cannot be modified in the cloud.

Creating the User

  1. Open Active Directory Users and Computers on a Windows 2008 R2 or later server. Ensure that Advanced Features is enabled (View > Advanced Features)
    1. Note that if you do not have 2008 R2 or later then use ADSI Edit to make the changes mentioned below that are made on the Attribute Editor tab in Active Directory Users and Computers 2008 R2 or later.
  2. Create an Active Directory user as you normally would. Do not complete any Exchange server properties if you are requested to do so. Completing Exchange on-premises will make a mailbox on premises that will then need to be migrated to Exchange Online. This document describes creating the mailbox online.
  3. Ensure that the user’s email address on the General tab of the AD properties is correct.
  4. Ensure that the users login name on the Account tab is as follows:
    1. User Logon Name: The first part of their email address
    2. The Domain name drop-down: The second part of their email address (not the AD domain name if they are different)
    3. User Logon Name (Pre Windows 2000): the DOMAIN as provided, and the first part of the email address (i.e. first.last etc.). If the first part of the email address is too long, enter as much of it as you can and ensure it is unique within the domain.

Setting the Email Address Properties

  1. On the Attribute Editor tab ensure that Filter > Show only attributes that have values is not selected. Then find and enter the following information (a PowerShell alternative is sketched after this list):
    1. proxyAddresses: SMTP:primary.email@domain for this user – SMTP needs to be in capitals. Then add additional email addresses as required, but these start with smtp: in lower case.
    2. targetAddress: SMTP:first_part_of_email@tenantname.onmicrosoft.com
    3. Note that both these addresses need to be unique within your directory – Attribute Editor will not check them for uniqueness, but they will fail to replicate to Azure with DirSync if they are not unique.
  2. Click OK and close the account creation dialog.
  3. Within three hours this object will sync to Windows Azure Active Directory.
    1. This can be sped up by logging into the DirSync server and starting PowerShell
    2. Type “Import-Module DirSync” in PowerShell
    3. Type “Start-OnlineCoexistenceSync” in PowerShell – DirSync will replicate now rather than waiting up to three hours.
  4. Check that the DirSync process was successful – if you have entered values that are not unique then DirSync will fail to replicate them and you will need to fix them on-premises and replicate them again.
  5. Licence the user in Office 365 by logging into https://portal.office.com and granting this user a licence that contains an Exchange Online licence. The mailbox will be created automatically shortly afterwards.
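As an alternative to the Attribute Editor steps above, the same two attributes can be set from PowerShell on a machine with the Active Directory module installed – a sketch, with a hypothetical user and domains:

Import-Module ActiveDirectory

# proxyAddresses: the primary address needs SMTP: in capitals, extras in lower case.
# targetAddress is single-valued, so -Replace is used for it.
Set-ADUser -Identity first.last `
    -Add @{ proxyAddresses = 'SMTP:first.last@domain.com', 'smtp:first.last@otherdomain.com' } `
    -Replace @{ targetAddress = 'SMTP:first.last@tenantname.onmicrosoft.com' }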

Office 365 ProPlus XML Config Files Are Case Sensitive


The XML file used for the configuration of Office 365 ProPlus is case sensitive. At a client I have been working with, the UpdatePath value in the install XML file was accidentally specified as “Updatepath” rather than “UpdatePath”. This resulted in the UpdateUrl value in the registry (HKLM\Software\Microsoft\Office\15.0\ClickToRun\Scenario\INSTALL\UpdateUrl) not being set, and even though an update path was specified in the install XML, Office was still going to the internet for updates.

This results in users being prompted to update Office themselves, even though the XML file Office was installed with points updates to a file share or specific path:

image

If you want to see if you have a working copy of Office that updates from the file share correctly then please open the registry editor and view the following location: HKLM\Software\Microsoft\Office\15.0\ClickToRun\Scenario\INSTALL

In this registry location look for the UpdateUrl value. It should be present and point to the file share where Office is deployed from (the UpdatePath value in the XML should be listed here). If it is missing then you need to run the Office installation again (setup /configure updated.xml, where updated.xml is your corrected configuration file) with UpdatePath correctly specified for this to be reset – do not change the registry keys by hand as this does not work.
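A quick way to read that value, rather than browsing the registry by hand, is the following (it only reads the key described above; nothing is changed):

# Returns the configured update path, or nothing if UpdateUrl is missing
$key = 'HKLM:\Software\Microsoft\Office\15.0\ClickToRun\Scenario\INSTALL'
(Get-ItemProperty -Path $key -Name UpdateUrl -ErrorAction SilentlyContinue).UpdateUrl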

image

Speaking at TechEd Europe 2014


I’m pleased to announce that Microsoft have asked me to speak on “Everything You Need To Know About SMTP Transport for Office 365” at TechEd Europe 2014 in Barcelona. It’s going to be a busy few weeks, as I go straight from that event to the MVP Summit in Redmond, WA.

image

My session will look at how you can ensure your migration to Office 365 is successful with regards to keeping mail flow working and avoiding non-delivery of messages. We will cover real-world scenarios for hybrid and staged migrations so that we can consider the impact on mail flow at all stages of the project. We will look at testing mail flow, SMTP to multiple endpoints, solving firewalling issues, and how email addressing and distribution group delivery is done in Office 365, so that we always know where a user is and what will happen when they are migrated.

Compliance and hygiene issues will be covered with regards to potentially journaling from multiple places and the impact of having anti-spam filtering in Office 365 that might not be your mail flow entry point.

We will consider the best practices for changing SMTP endpoints, when a good time is to change over from on-premises-first to cloud-first delivery, and, if you need to maintain on-premises delivery, how you should go about that process.

And finally we will cover troubleshooting the process should it go wrong, and how to see what is actually happening during your test phase when you are trying out different options to find which works for your company and your requirements.

Full details of the session, once it goes live, are at http://teeu2014.eventpoint.com/topic/details/OFC-B350 (Microsoft ID login needed to see this). Room and time to be announced.

Exchange Online Free/Busy Issues with OAuth Authentication


OAuth authentication is a new server-to-server authentication model available in Exchange 2013 SP1 and later and in Exchange Online (Office 365). With OAuth enabled in an Exchange hybrid deployment where you have multiple on-premises Exchange endpoints running different versions of Exchange Server, you might have issues getting Exchange Online to on-premises free/busy lookups to work.

Here is the scenario:

Company with Exchange 2010 servers in multiple internet connected sites, going hybrid to Exchange Online.

Exchange Online tenant created and hybrid mode put in place between Exchange Online and Exchange Server 2013 on-premises. In the site where the Exchange 2013 hybrid servers are located there are Exchange 2010 SP3 servers. As hybrid mode was set up with SP1, OAuth was enabled.

Exchange 2010 in the remote sites is configured with an ExternalURL for EWS. Therefore a free/busy lookup from an Office 365 user to a mailbox in one of these remote sites goes direct to the EWS endpoint on Exchange 2010 – it is not proxied via the 2013 hybrid server.

With OAuth enabled this configuration will fail, as Exchange Online will use OAuth to authenticate to Exchange 2010 on-premises, which does not support it. The IIS logs will contain entries such as this:

2014-07-22 19:39:34 10.100.28.73 POST /ews/exchange.asmx – 443 – 10.100.28.220 ASProxy/CrossForest/EmailDomain//15.00.0985.008 401 0 0 0

The 401 indicates authentication failed, and the path ASProxy/CrossForest/EmailDomain indicates OAuth is in use. There will be no entries in the IIS log for the Federation Org type of authentication.

If the EWS connection for free/busy goes via the 2013 hybrid server (the ExternalURL for the remote site is null) then the free/busy lookup works; likewise, if the OAuth connector in Exchange Online is disabled and the EWS lookup for free/busy goes direct to the remote Exchange 2010 server, free/busy lookups work.
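Assuming OAuth was enabled by the hybrid wizard creating an Intra-Organization Connector (as the Exchange 2013 SP1 wizard does), you can inspect it and, if you accept losing the OAuth-dependent features mentioned below, disable it from an Exchange Online PowerShell session – a sketch:

# In Exchange Online PowerShell: see whether an OAuth connector exists
Get-IntraOrganizationConnector | Format-List Name, TargetAddressDomains, Enabled

# Disable it so cross-premises free/busy falls back to federation (DAuth)
Get-IntraOrganizationConnector | Set-IntraOrganizationConnector -Enabled $false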

So if you want OAuth and direct EWS connections to remote sites for free/busy you need Exchange 2013 at those remote sites. If you want to have Exchange 2013 hybrid servers only at your primary site (for mail flow) and OAuth as well (for eDiscovery cross-forest) then you need to proxy your EWS free/busy requests via the Exchange 2013 hybrid server.

This is a known issue in Exchange and may be fixed in the future.

Installing Office 365 ProPlus Click To Run via GPO Deployment


Office 365 ProPlus can be deployed via Group Policy, but there are a few things that you need to know and do first. These are:

  1. You cannot use the “Software Installation” feature of GPOs to deploy the Office 365 ProPlus Click To Run software, as this is an exe file and “Software Installation” deploys MSI files.
  2. You cannot run the software with elevated installation rights either, as setup.exe shells out to other processes to run the installation (the officeclick2run.exe service).

Therefore you need to deploy the software via a computer startup script. But this is not simple either, as startup scripts run each time the computer starts (obviously!) regardless of whether the software is already installed. Therefore you need to run the installation by way of a startup script that first checks whether Office 365 ProPlus Click To Run has already been installed.

To do this you need the following:

  1. A read only file share containing the Office 365 ProPlus Click To Run files
  2. A read/write file share to store log files on (the deployment script logs the start and completion of the installation in a central location)
  3. An XML file to install Office 365 ProPlus Click To Run customised to your environment and the fact that you are using GPO deployment
  4. A batch file to detect an existing Office 365 ProPlus Click To Run deployment and if not present to install Office 365 ProPlus Click To Run from your file share.
  5. And finally the Office Deployment Tool setup program.

Steps 1 and 4 are part of a standard Office 365 ProPlus Click To Run deployment process and so are not covered in detail in this blog post. Once you have downloaded the Office Deployment Tool and created the XML file in step 3, you can run the deployment tool with setup.exe /download config.xml to download the Office binaries to the file share mentioned in step 1.

So here are the steps and details for doing all this for GPO deployment:

Creating Deployment File Shares

Create a software deployment file share to which you have read/write access and everyone else has read-only access, and create a folder called Office365ProPlus inside it to store the binaries.

Create a second file share to which everyone has read/write access (or where CREATOR OWNER has write access, so that only the creator of a file can write it to the share and others can read it or not see it at all). Create a subfolder in InstallLogs called Office365ProPlus.

In my demo these two shares and subfolders are called \\server\Software\Office365ProPlus and \\server\InstallLogs\Office365ProPlus.

Create an XML File for Office 365 ProPlus Click To Run Deployment

This XML file is as follows and is saved to the \\server\Software\Office365ProPlus root folder. Call this file config.xml.

<Configuration>
  <Add SourcePath="\\server\Software\Office365ProPlus\" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Updates Enabled="TRUE" UpdatePath="\\server\Software\Office365ProPlus\" />
  <Display Level="None" AcceptEULA="TRUE" />
  <Logging Path="%temp%" />
</Configuration>

The Display Level of None and the acceptance of the End User Licence Agreement (AcceptEULA) are important, as GPO deployment runs as a system service and so cannot display anything on the screen. Also see http://technet.microsoft.com/en-us/library/jj219426(v=office.15).aspx for the XML reference file covering other settings you can include here, such as updates from the internet (UpdatePath=""), no updates (Updates Enabled="FALSE"), multiple languages (add more <Language ID="xx-xx" /> nodes to the file), etc.

Download the Office 365 ProPlus Click To Run Binaries

Download the Office Deployment Tool from http://www.microsoft.com/en-gb/download/details.aspx?id=36778 – and if you downloaded it a few months ago, download it again, as it changes frequently and the setup process improves with each release.

Install this software to get setup.exe and some example XML files. Copy setup.exe to \\server\Software\Office365ProPlus.

Run \\server\Software\Office365ProPlus\setup.exe /download \\server\Software\Office365ProPlus\config.xml to download the latest version (or the specified version if you have added Version="15.1.2.3" to config.xml, where 15.1.2.3 is the build number you want to install). This will create the Office\Data folder in the \\server\Software\Office365ProPlus share and download the binaries and any languages specified in the XML to that location – do not modify the folder structure, as the Office Deployment Tool expects this structure when it looks for the files during installation.

Create A CMD File To Script The Install

In Notepad create a cmd file and save it to \\server\Software\Office365ProPlus as well. It will eventually go in the GPO folder location, but this will be your master copy. The cmd file will look like the following and for this demo is called _InstallOfficeGPO.cmd

setlocal

REM *********************************************************************
REM Environment customization begins here. Modify variables below.
REM *********************************************************************

REM Set DeployServer to a network-accessible location containing the Office source files.
set DeployServer=\\server\Software\Office365ProPlus

REM Set ConfigFile to the configuration file to be used for deployment (required)
set ConfigFile=\\server\Software\Office365ProPlus\config.xml

REM Set LogLocation to a central directory to collect script log files (install log files are set in XML file).
set LogLocation=\\server\InstallLogs\Office365ProPlus

REM *********************************************************************
REM Deployment code begins here. Do not modify anything below this line.
REM *********************************************************************

IF NOT "%ProgramFiles(x86)%"=="" (goto ARP64) else (goto ARP86)

REM Operating system is X64. Check for 32 bit Office in emulated Wow6432 registry key
:ARP64
reg query HKLM\SOFTWARE\WOW6432NODE\Microsoft\Office\15.0\ClickToRun\propertybag
if NOT %errorlevel%==1 (goto End)

REM Check for 32 and 64 bit versions of Office 2013 in regular registry key.(Office 64bit would also appear here on a 64bit OS)
:ARP86
reg query HKLM\SOFTWARE\Microsoft\Office\15.0\ClickToRun\propertybag
if %errorlevel%==1 (goto DeployOffice) else (goto End)

REM If 1 returned, the product was not found. Run setup here.
:DeployOffice
echo %date% %time% Setup started. >> %LogLocation%\%computername%.txt
start /wait %DeployServer%\setup.exe /configure %ConfigFile%
echo %date% %time% Setup ended with error code %errorlevel%. >> %LogLocation%\%computername%.txt

REM If 0 or other was returned, the product was found or another error occurred. Do nothing.
:End

Endlocal

This will be run by GPO at computer startup and looks for the Click To Run registry key that indicates Office has been installed. If the key is not found (checking both 64 and 32 bit OSes, and 64 and 32 bit installations of Office) then it will deploy Office.

Create A Group Policy Object

Create a GPO in your domain linked to an OU that contains the computers you want to install Office 365 ProPlus Click To Run on. This will run on all computers in the OU, so start with a test OU containing one or a few computers, or use permissions to lock the GPO down to specific computer accounts.

In this GPO set the following:

  1. A startup script that runs _InstallOfficeGPO.cmd. A startup script has a folder the script is located in (click the Show Files button in the GPO editor) – copy the above cmd file from the Office deployment share to this folder.
  2. Then click Add and select the file – there are no script parameters.
  3. Your GPO object will look like this.
    image
  4. In Administrative Templates/System/Scripts set the Maximum wait time for Group Policy scripts to 1800 seconds (30 minutes). The default is 10 minutes (600 seconds), but I have found Office installs take just over ten minutes on a LAN and longer if the file share is remote to the client computer. The script will be cancelled if it takes over 30 minutes, so you may need a higher value for your network.

Deploy Office 365 ProPlus Click To Run

Run gpupdate /force on a test computer that is under the scope of your GPO object and then reboot the computer. The installation will start automatically and Office will be ready to use a few minutes after reboot. Office takes about 10 minutes to fully install on a LAN but can be used about 2 or 3 minutes after installation starts. Do not reboot the PC in those 10 minutes.

Check \\server\InstallLogs\Office365ProPlus for a file named after the computer. It will have two lines: one for the start of the deployment and one at the end (“Setup ended with error code 0” if successful).


Group Policy Import To Fix Google Chrome v37 Issues With Exchange Server and Microsoft CRM


A recent update to Google Chrome (37.0.2062.120) removed support for modal dialog boxes. These are dialogs that require your attention and stop you going back to the previous page until you have completed the information required – very useful in workflow-type scenarios.

Google claim that as only 0.004% of web sites use them (from the anonymous statistics gathering that you can opt into in Chrome) they are justified in removing support for them – yet they have not removed other features with the same level of usage!

With this version of Chrome (or the Chromium open source browser) there is a workaround, available until April 30th 2015, that will allow modal dialogs to work again. Without this workaround, clicking links in OWA and ECP in Exchange 2010, and in OWA and EAC in Exchange Online and Exchange 2013, will not pop up dialogs. This can cause issues such as the inability to attach files in OWA and to create objects in ECP/EAC as an administrator. Popups in Microsoft CRM also do not work.

As a workaround you could use a different browser, but if Chrome is your browser of choice and your machines are joined to a domain, you can download the following GPO export file and import it into your Active Directory to enable modal dialogs to work again in Exchange Server, Office 365 and Microsoft CRM products.

To download and import this GPO file and so restore Chrome modal dialog box functionality (until 30th April 2015, when Google stop allowing the workaround) follow these steps:

  1. Download Google Chrome Show Modal Dialog Before 30 April 2015.zip
  2. Copy it to a domain controller and expand the zip file. Ensure the contents of the zip file are not placed directly on your desktop, as you cannot import from the desktop directly; if you expand the zip to the desktop, copy the folder that was created into a new subfolder.
  3. Start Group Policy Management MMC admin tool.
  4. Expand Forest > Domain > Your Domain > Group Policy Objects.
  5. Right click “Group Policy Objects” and choose New
  6. Create a new GPO called “Chrome and Chromium Modal Dialog Box Allow”:
    image
  7. Right click “Chrome and Chromium Modal Dialog Box Allow” GPO that you just made and choose Import Settings
  8. Proceed through the import wizard. You do not need to back up this new GPO on the second page of the wizard, as the new GPO is empty.
  9. On the third page of the wizard browse to the parent folder containing the contents of the download above:
    image
  10. Click Next and you should see one backed up GPO listed:
    image
  11. Click Next to import this. If you click View Settings first, a web page will open showing that this GPO sets two registry keys for the computer and two for the user. These set SOFTWARE\Policies\Chromium\EnableDeprecatedWebPlatformFeatures and Software\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures (for the Chromium and Chrome browsers respectively), each with a string value named 1 and data ShowModalDialog_EffectiveUntil20150430 (a PowerShell sketch for setting these by hand follows this list)
  12. Proceed with Next and then Finish and the import will begin:
    image
  13. Click OK.
  14. Now link the GPO to the root of your domain so it impacts all users, and to the root of any OU that blocks inheritance. Import it into other domains as above, or link it from this domain, depending upon your current policy for managing GPOs across domains.
  15. Delete the zip and folder you downloaded. They are not needed any more.
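For a quick test on a single machine outside Group Policy, the same registry values the GPO import creates can be set directly – a sketch for the Chrome path (repeat with SOFTWARE\Policies\Chromium for the Chromium browser):

# Create the EnableDeprecatedWebPlatformFeatures list policy with entry "1"
$path = 'HKLM:\SOFTWARE\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name '1' -PropertyType String `
    -Value 'ShowModalDialog_EffectiveUntil20150430' -Force | Out-Null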

The Case of the Disappearing Folders


Here is an issue I have come across at one of my current clients – you create a folder in Outlook 2013 when in the “Mail” view (showing only mail folders – your typical default view) and the folder does not get created. For example, in the picture below the user is in the middle of creating a folder called “Test Inline” as a child of the “SO” folder:

image

Upon pressing Enter, the folder disappears and fails to be created:

image

So where does one see this issue? It happens when the parent folder in question, in this case the “SO” folder, was created by Microsoft’s PST Capture Tool. The PST Capture Tool creates a parent folder in the Online Archive in Exchange (in this case Exchange Online, but it does not matter which Exchange Server) named after the PST file; in this case SO.pst was uploaded by the PST Capture Tool. Any attempt to create folders inline below this parent folder fails! If you drag content into this folder it will not let you drop the content in, and the folder appears to be read-only.

If you change Outlook’s view to Folder view (click the … on the Outlook 2013 view bar to the right) then you can create folders (using a dialog) and that works fine – this is how “Test Dialog” was made in the above pictures.

In Outlook 2010 all works as expected! In Outlook 2013 the issue appears to be the way Outlook handles folders that have a MAPI property created with a null value. In tools such as MFCMapi and OutlookSpy you can view the MAPI properties of a folder, and the folder created by the PST Capture Tool has a property called PR_CONTAINER_CLASS_W with a null (empty) value. Normally Outlook creates folders with “IPF.Note” as the value of this property if the folder is a mail and notes folder (i.e. not a calendar or contacts folder, etc.). But clearly there is a problem, as Folder view allows you to create subfolders when the parent’s PR_CONTAINER_CLASS_W value is null, as does Outlook 2010 and, coincidentally, OWA!

The fix, which I do not have ready yet, is to run an EWS script to reset the PR_CONTAINER_CLASS_W property of this folder to IPF.Note – or to wait for an update to Outlook from Microsoft, about which I have contacted them.

With thanks to fellow MVP Jaap Wesselius for double-checking this for me and testing it in Outlook 2010.

SSL and Exchange Server


In October 2014 or thereabouts it became known that the SSL protocol (specifically SSL v3) was broken and decryption of the encrypted data was possible. This blog post sets out the steps to protect your Exchange Server organization regardless of whether you have one server or many, and whether or not you use a load balancer. As load balancers can terminate the SSL session and recreate it, changes might be needed on your load balancer, or perhaps directly on the servers that run the CAS role. This blog post covers both options and looks at the settings for a Kemp load balancer and a JetNexus load balancer.

Of course, being an Exchange Server MVP I tend to blog about Exchange-related stuff, but actually this is valid for any server that you publish to the internet, and probably for any internal server to which you encrypt traffic via the SSL suite of protocols. Microsoft outline the below configuration at https://technet.microsoft.com/en-us/library/security/3009008.aspx.

The steps in this blog look at turning off the SSL protocol in Windows Server and turning on the TLS protocol (which does the same job as SSL, is interchangeable with it, and is more secure at the time of writing – Jan 2015). Some clients do not support TLS (such as Internet Explorer on Windows XP Service Pack 2 or earlier), so securing your servers may stop some home users connecting to your Exchange Servers; but as XP SP2 should not be in use in any business now, these changes should not affect business desktops. You could always use a different browser on XP as that might mitigate this issue, but using XP is a security risk in and of itself anyway! Stopping clients from connecting to SSL v3 sites requires a client or GPO setting, which can be found via your favourite search engine.

Note that the registry settings and updates for the load balancers in this blog post will restrict client access to your servers if your client cannot negotiate a mutual cipher and secure channel protocol. Therefore care and testing are strongly advised.

Testing and checking your changes

Before you make any changes to your servers, especially internet facing ones, check and document what you have in place at the moment using https://www.ssllabs.com/ssltest. This service will connect to an SSL/TLS protected web site and report back on the issues found. Before running any of the changes below see what overall rating you get and document the following:

  • Authentication section: record the signature algorithm. It’s possible the certificate authority signature will be marked “SHA1withRSA WEAK SIGNATURE”. This certificate, if rekeyed and issued again by your certificate authority, might be replaced with a SHA-2 certificate. From September 2014 the Google Chrome browser reports sites secured with a SHA-1 certificate as not fully trustworthy, based on the expiry date of the certificate. If your certificate expires after Jan 1st 2017 then get it rekeyed as soon as possible. As 2015 goes on this cut-off date moves closer: from early 2015 it becomes June 1st 2016, and so on. Details on the dates for this impact are in http://googleonlinesecurity.blogspot.co.uk/2014/09/gradually-sunsetting-sha-1.html. You can also use https://shaaaaaaaaaaaaa.com/ to test your certificate if the site is public facing, and this website gives details on who is now issuing SHA-2 keyed certificates. You can examine your external servers for SHA-1 certificates and the impact in Chrome (and later IE and Firefox) at https://www.digicert.com/sha1-sunset/. To do the same internally, use the DigiCert Certificate Inspector at https://www.digicert.com/cert-inspector.htm.
  • Authentication section: record the path values. Ensure that each certificate is either in the trust store or sent by the server and not an extra download.
  • Configuration section: document the cipher suites that are provided by your server
  • Handshake simulation section: here it lists browsers and other devices (mobile phones) and their default cipher. If you do not support a cipher the client supports then you cannot communicate. Note that you typically support more than one cipher, and the client will often support more than one cipher too, so a mismatch shown here does not necessarily mean it will not work. If a listed client is used by your users, click the link for the client and ensure that the server offers at least one of the ciphers required by the client – unless all the ciphers are insecure, in which case do not use that client!

Once you have a document on your current configuration, and a list of the clients you need to support and the ciphers they need you to support, you can go about removing SSL v3 and insecure ciphers.

Disabling SSL v3 on the server

To disable SSL v3 on a Windows Server (2008 or later) you need to set the Enabled registry value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server” to 0. If this value does not exist, create a DWORD value called “Enabled” and leave it at 0. You then need to reboot the server.

If you are using Windows 2008 R2 or earlier you should enable TLS v1.1 and v1.2 at the same time. These versions of Windows Server support TLS v1.1 and v1.2, but they are not enabled by default (only TLS v1.0 is enabled). To enable them, set the Enabled value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server” to 1, then make the same setting at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server” to support TLS v1.2. If these keys do not exist, create them. It is also documented that the “DisabledByDefault” value is required, but I have seen it noted as being the same as the “Enabled” value – just the opposite value. Therefore, as I have not actually checked, I set both Enabled to 1 and DisabledByDefault to 0.

To disable both SSL v2 and v3 (v2 can be enabled on older versions of Windows and should be disabled as well) I place the following in a .reg file and double-click it on each server, followed by a reboot for it to take effect. This .reg file also disables the RC4 ciphers. These ciphers have been considered insecure for a few years, so when I configure my servers not to support SSL v3 I disable the RC4 ciphers as well.

Windows Registry Editor Version 5.00

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 128/128]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 40/128]

"Enabled"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 56/128]

"Enabled"=dword:00000000

Then I use the following .reg file to enable TLS v1.1 and TLS v1.2:

Windows Registry Editor Version 5.00

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]

"Enabled"=dword:00000001

"DisabledByDefault"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]

"Enabled"=dword:00000001

"DisabledByDefault"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]

"Enabled"=dword:00000001

"DisabledByDefault"=dword:00000000

 

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]

"Enabled"=dword:00000001

"DisabledByDefault"=dword:00000000

 

Once you have applied both of the above sets of registry keys you can reboot the server at your convenience. Note that the reg keys may set values that are already set; for example, TLS v1.1 and v1.2 are enabled on Exchange 2013 CAS servers and SSL v2 is disabled. The first of the graphics below comes from a test environment of mine running Windows Server 2012 R2 without any of the above registry keys set. You can see that Windows Server 2012 R2 is vulnerable to the POODLE attack and supports the weak RC4 cipher.

image

The F grade comes from a patched but, with regard to SSL, unconfigured Windows Server 2008 R2 server:

image

After setting the above registry keys and rebooting, the test at https://www.ssllabs.com/ssltest then showed the following for 2012 R2 on the left (A grade) and Windows Server 2008 R2 on the right (A- grade):

image image
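To see what Schannel currently has configured in the registry before and after applying the .reg files, a quick dump such as the following works – keys that are absent simply mean the operating system default applies:

# List every Enabled/DisabledByDefault value under the Schannel Protocols key
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols'
Get-ChildItem -Path $base -Recurse | ForEach-Object {
    $p = Get-ItemProperty -Path $_.PSPath
    '{0}: Enabled={1} DisabledByDefault={2}' -f $_.Name, $p.Enabled, $p.DisabledByDefault
}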

Disabling SSL v3 on a Kemp LoadMaster load balancer

If you protect your servers with a load balancer, which is common in the Exchange Server world, then you need to set your SSL and cipher settings on the load balancer, unless you are only balancing at TCP layer 4 and doing SSL pass through. Therefore even for clients that have a load balancer, you might not need to make the changes on the load balancer, but on the server via the above section instead. If you do SSL termination on the load balancer (TCP layer 7 load balancing) then I recommend setting the registry keys on the Exchange servers anyway to avoid security issues if you need to connect to the server directly and if you are going to disable SSL v3 in one location (the load balancer) there is no problem in disabling it on the server as well.

For a Kemp load balancer you need to be running version 7.1-20b to be able to do the following, and to ensure that the SSL code on the load balancer is not susceptible to issues such as Heartbleed. To configure your load balancer to disable SSL v3 you need to modify the SSL properties of the virtual service and check the “Support TLS Only” option.

To disable the weak RC4 ciphers there are a few choices, but the easiest I have seen is to select “Perfect Forward Secrecy Only” under Selection Filters and then add all the listed filters. Then, from this list, remove the three RC4 ciphers.

If you do not select “Support TLS Only” and leave the ciphers at the default level then your load balancer will get a C grade in the test at https://www.ssllabs.com/ssltest, because it is vulnerable to the POODLE attack. Setting just the “Support TLS Only” option and leaving the default ciphers in place will result in a B grade, as RC4 is still supported. Removing the RC4 ciphers (by following the instructions above to add the perfect forward secrecy ciphers and remove the RC4 ciphers from that list) as well as allowing only the TLS protocol will result in an A grade.

image

From version 7.1-22b the Kemp also stops supporting SSL v3 for its API and web interface, in addition to the above steps protecting the virtual services that the load balancer offers.

Kemp Technologies document the above steps at https://support.kemptechnologies.com/hc/en-us/articles/201995869, and point out the unobvious behaviour that if you filter the cipher list with the “TLS 1.x Ciphers Only” setting then it will only show you the TLS 1.2 ciphers and not any TLS 1.1 or TLS 1.0 ciphers. Therefore selecting “TLS 1.x Ciphers Only”, rather than filtering using “Perfect Forward Secrecy Only”, will result in a reduced client list, which may be an issue.

I was able to achieve an A grade on the SSL Labs test site. My certificate uses SHA-1 but expires in 2015, so by the time SHA-1 is reported as an issue in the browser I will have changed it anyway.

image

Disabling SSL v3 on a JetNexus ALB-X load balancer

The same considerations described above for the Kemp LoadMaster apply here: if you terminate SSL on the load balancer (TCP layer 7), make the changes on the load balancer and ideally set the registry keys on the servers too; if you are doing layer 4 SSL pass-through, only the server registry changes matter.

For a JetNexus ALB-X load balancer you need to be running build 1553 or later (build 1553 is a version 3 build, so any version 4 build is later and therefore valid). This build (version 3.54.3) or later is needed to ensure Heartbleed mitigation and to allow the following configuration changes to be applied.

To configure the JetNexus you need to upload a config file to turn off SSL v3 and the RC4 ciphers. The config file is a .txt file that is uploaded to the load balancer. In version 4, the primary cluster node can have the file uploaded to it and the changes are replicated to the second node in the cluster automatically.

Before you upload a config file to make the changes required, ensure that you back up the current configuration from Advanced >> Update Software: click the button next to Download Current Configuration to save the configuration locally. Ensure you back up all nodes in a v4 cluster as appropriate.

Then select one of the three config file settings below, copy it to a text file, and upload it from Advanced >> Update Software using the Upload New Configuration option. The upload will reset all connections, so do this during a quiet period.

The three configs are: reset the default ciphers; disable SSL v3 and RC4; and disable TLS v1.0, SSL v3 and RC4.

JetNexus protocol and cipher defaults:

#!update

 

[jetnexusdaemon]

Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH"

Cipher1=""

Cipher2=""

CipherOptions="CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable SSL v3 and disable RC4 ciphers:

#!update

 

[jetnexusdaemon]

Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"

Cipher1=""

Cipher2=""

CipherOptions="NO_SSLv3,CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable TLS v1.0, SSL v3 and disable RC4 ciphers:

#!update

 

[jetnexusdaemon]

Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"

Cipher1=""

Cipher2=""

CipherOptions="NO_SSLv3,NO_TLSv1,CIPHER_SERVER_PREFERENCE"

On my test environment I was able to achieve an A- grade on the SSL Test website with the config to disable TLS v1.0, SSL v3 and RC4 applied. The A- is because of a lack of support for Forward Secrecy with the reference browsers used by the test site.

image

Browsers and Other Clients

There is too much to discuss with regard to clients, other than that they need to support the same ciphers as mentioned above. A good guide to clients can be found at https://www.howsmyssl.com/s/about.html, and from there you can test your own client as well.

Additional comment 23/1/15: One important point comes courtesy of Ingo Gegenwarth at https://ingogegenwarth.wordpress.com/2015/01/20/hardening-ssltls-and-outlook-for-mac/. This post discusses the TLS Renegotiation Indication Extension update in RFC 5746. It is possible to use the AllowInsecureRenegoClients registry value at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL to ensure that only clients with the update mentioned at http://support.microsoft.com/kb/980436 are allowed to connect. If this is enabled (set to Strict Mode) and SSL 2 and 3 are disabled as above, then Outlook for Mac clients cannot connect to your Exchange Server. If this value is deleted or has a non-zero value then connections over SSL 2 and 3 can be made, but only for a renegotiation to TLS. Therefore ensure that you allow Compatibility Mode (which is the default) when you disable SSL 2 and 3, as Outlook for Mac and Outlook for Mac for Office 365 both require SSL support to then be able to start a TLS session.
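If you want to be certain you are in Compatibility Mode after hardening, removing the strict-mode value is enough – a sketch based on the key described above (absent or non-zero means Compatibility Mode):

# Remove AllowInsecureRenegoClients if present, leaving the default behaviour
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL'
Remove-ItemProperty -Path $key -Name AllowInsecureRenegoClients -ErrorAction SilentlyContinue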

Windows RRAS VPN and Multi Factor Authentication


This blog post covers the steps to add Multi-Factor Authentication (MFA) to a Windows RRAS server. Once this is enabled, and you sign in with a user enabled for MFA in Azure Multi-Factor Authentication Server (an on-premises server), you are required to answer your phone before you can connect over the VPN. That is, you connect to the VPN endpoint, enter your username and password, and if they are correct, confirm that you want to authenticate by answering your phone. If someone else is connecting over the VPN with your credentials, then unless they also have your phone they are not going to succeed! And all this for less than £1 per user per month!

This configuration requires the following components set up:

  • Multi Factor Authentication set up in Azure
  • Azure Multi-Factor Authentication Server installed on-premises
  • Some users configured in Azure Multi-Factor Authentication Server
  • RRAS VPN server configured to use RADIUS for authentication, with the MFA server being the RADIUS endpoint

Step 1: MFA setup in Microsoft Azure

To do this you need an Azure subscription and DirSync configured to populate the Azure Active Directory with users. If you already have Office 365 with DirSync then you have this configuration already and you can login to Azure using the Azure AD link from the Office 365 management portal.

Once in Azure select “Active Directory” from the portal and click “Multi-Factor Auth Providers” from the menu at the top. You will probably not have any providers listed here, but if you do already (for example you are already using MFA for Office 365 or AD FS) then you can use the existing provider. To add a provider click Add, select “Multi-Factor Auth Provider” and “Quick Create” as shown:

image

Provide a name and then choose a usage model: Per User or Per Authentication. Per User works out cheaper when a single user will authenticate more than 10 times a month; when users will only use MFA occasionally you can buy the service by the authentication request. For example, if you had 200 VPN users who connected each day, you would choose Per User. But if you had 200 VPN users who only dialled in once a month (i.e. a total of 200 authentications), you would be better off with the Per Authentication model, as you would pay for 20 batches of authentications (each batch allows 10 authentications, regardless of user). You cannot change the usage model without removing the auth provider and making a new one.

Finally, link the provider to your directory.

Select your auth provider once it is created and click Manage at the bottom of the portal:

image

This opens a new tab in the browser and takes you to the Azure Multi-Factor Authentication management pages.

Whilst here, as there is actually not a lot to do here, take a look at Configure to see what settings you can change. Maybe enter your email address for the fraud alert notifications, but leave everything else as is for now.

Back on the home page of the Azure Multi-Factor Authentication web site, click Downloads.

Step 2: Installing Multi-Factor Authentication Server

From the Downloads page find the small download link (above the Generate Activation Credentials button) and download the software to a Windows Server that is joined to your domain.

On this server install .NET 2.0 and IIS with the default settings. Ensure that you have a digital certificate installed, as the web site that users will go to for provisioning and managing their device is available over SSL. Mobile phones can use the app to validate connections as well (that will be the subject of a different blog post), but you need a trusted certificate that is valid and has a subject name such as mfa.domain.com (where domain.com is your domain), so a 3rd party certificate is required. In this blog I have used my wildcard certificate from DigiCert.

Run the Multi-Factor Authentication Server installer and proceed through the steps. Use the wizard to configure the server and select VPN. During the installation you will also need to authenticate the Multi-Factor Authentication Server to Azure. This requires a set of credentials that are valid for ten minutes at a time, generated from the Generate Activation Credentials button in the management web page at Azure – so don’t click this button until the Multi-Factor Authentication Server requires this info.

For this blog I am going to protect my VPN with Azure MFA. Therefore during the configuration wizard I select just the VPN option:

image

As you proceed through the wizard you will be asked about the RADIUS client configuration needed for your VPN provider. In here enter the IP address of your RRAS box and a password that you have made up for the occasion. You will need this password, or shared secret, when configuring the RRAS server later.

image

Finish the installation of Multi-Factor Authentication Server.

Once complete, open the Multi-Factor Authentication Server management program and select RADIUS Authentication. Ensure Enable RADIUS authentication is selected, as this allows the server to provide authentication on behalf of the RADIUS client and therefore insert requests for MFA via the user’s phone into the authentication flow.

image

Double click the IP address of your VPN server and select “Require User Match”

Step 3: Configure Users for MFA

Click the Users icon in Multi-Factor Authentication Server and click Import from Active Directory. Set the filtering to add just the users you want to enable MFA for. A user who dials in who is not listed here will not be blocked from authentication to the VPN.

image

A user will have a yellow warning icon next to it if it is disabled. For disabled users you can allow authentication to pass through the MFA server without requiring the user to have the second factor of authentication working: this is set on the user’s properties, on the Advanced tab, by selecting Succeed Authentication for “When user is disabled”. The enabled check box is on the General tab.

If a user is enabled here then they will need to complete the MFA authentication process. That process always starts with getting their username and password correct; after that they do one of the following:

  • Press # when the call comes through to their phone
  • Reply to a text message – texts go to a US number, so this might cost the user international rates!
  • Press the Verify button on the MFA app on their phone
  • Optionally add a PIN to any of the above – for example, when the MFA call comes through, enter your PIN and then press # rather than just #.

Each user can have different settings. When you import users from Active Directory, their mobile number is read (by default) from Active Directory as the primary number to authenticate against. You can set backup numbers if required. If a user has a mobile number they are enabled by default. When importing you can set which MFA method each user will use, and you can install the MFA user portal so that users can change their own settings if you want (outside the scope of this blog).

By now you have Azure MFA configured, the MFA server installed on-premises (it will need port 443 access to Azure to complete the authentication) and users set up in the MFA server. The MFA server is also configured to act as a RADIUS endpoint for your VPN service. If you install more than one MFA server for load balancing and HA, ensure that each MFA server is selected on the Multi-Factor Auth Servers tab on the RADIUS settings – this starts the MFA RADIUS service on each selected machine.

Before you configure the VPN, the final step here is to test a user. From the Users area of the MFA server select a user and click Test. Authenticate as the user (username and password are required for this test) and then press # after answering the phone. Try out the SMS/text message factor as well. To support the mobile app you need to install the user portal, the SDK and the mobile app web service – so that’s for a different blog post.

Step 4: Configure RRAS VPN to Use Multi-Factor Authentication

Finally, change to your RRAS server. Before going any further, ensure that RRAS is working before MFA is enabled – you don’t want to troubleshoot MFA only to find it was RRAS not working in the first place! The RRAS server’s IP address must match the IP address listed under the RADIUS configuration in the MFA server.

Right-click the RRAS server name in the Routing and Remote Access console. If you are setting up MFA for another type of VPN server then any that supports RADIUS will do. In the server properties, select the Security tab and change the Authentication provider to RADIUS Authentication (it was probably Windows Authentication).

image

Click Configure to the right of this drop-down and click Add:

image

Enter the IP address of your MFA server, repeating the Add process if you have more than one MFA server configured. Enter the shared secret that you used when setting up the MFA server and ensure that the timeout is set to 60 seconds. This is an important setting: when the user connects to the VPN server, the timeout needs to exceed the time it takes for the user’s phone to ring, them to listen to the greeting, enter the PIN (optionally) and press #. One minute should be enough for this. After one minute the RRAS VPN server will automatically fail the authentication, so the user has one minute to complete the second factor of authentication on their phone.

You should now be able to dial into your VPN and authenticate with your username and password. Once you succeed with this, the MFA authentication starts and the call will arrive on your phone:

image

You can get the graphic as a vCard from http://1drv.ms/1xXCA01. Download this vCard, save it to your contacts and when you sync your contacts to your phone, your phone will tell you the Microsoft Phone Auth service is calling. You could change the name and graphic to suit, just make sure the number matches the CallerID setting in Azure MFA.

Whilst you are waiting for the call to arrive, and before you accept the auth request, the VPN client appears to pause:

image

Once you complete the auth, the VPN session starts up. If the call and the time to answer exceed 60 seconds, consider increasing the RADIUS timeout on the VPN server.

Finally – and this will be a different blog post – you might want to offer users a portal they can go to in order to change their own settings, such as updating their phone number and changing their mode of authentication. But this is off topic for this post. Later posts will cover using this MFA server integrated with AD FS and OWA as well.

Exchange OWA and Multi-Factor Authentication


Multi-factor authentication (MFA) – the need to have a username, password and something else to pass authentication – is possible with on-premises servers using a service from Windows Azure and the Multi-Factor Authentication Server (an on-premises piece of software).

The Multi-Factor Authentication Server intercepts login requests to OWA; if the request is valid (that is, the username and password work) then the mobile phone of the user is called or texted (or an app starts automatically on the phone) and the user validates their login. This is typically done by pressing # (if a phone call) or clicking Verify in the app, but can require the entry of a PIN as well.

To configure Multi-Factor Authentication Server for OWA you need to complete the following steps.

Some of these steps are the same regardless of which service you are adding MFA to, and some are slightly different. I wrote a blog on MFA and VPN at http://www.c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication which contains the general setup steps, so those are not repeated here – just what you need to do differently.

Step 1: Create an Azure Multi-Factor Auth Provider

See http://www.c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication for the details of this step.

Step 2: Install MFA Server on-premises

This is covered in http://www.c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication, but the difference for OWA is that the MFA Server needs to be installed on the Exchange CAS server where the authentication takes place.

Ensure you have .NET 3.5 installed via Server Manager > Features; this also installs the .NET 2.0 feature that MFA Server requires. If the installation of the download fails, a missing .NET 3.5 is the most likely reason, so install it and then try the MFA Server install again.
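
If you prefer PowerShell to Server Manager, a minimal sketch using the ServerManager cmdlets follows; the -Source path is a placeholder and is only needed if the server cannot reach Windows Update:

# Install .NET Framework 3.5, which also provides the .NET 2.0 feature MFA Server needs
Install-WindowsFeature NET-Framework-Core

# If the payload is unavailable, point -Source at the sxs folder on the install media:
# Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs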

The install of the MFA Server does not take very long. After a few minutes it completes, and you then need to run the Multi-Factor Authentication Server admin tools, which are on the Start Screen under More Apps, or on the Start Menu. Note that, given time, the installer will start the software itself:

image

image

Do not skip the wizard; click Next. You will be asked to activate the server. Activating the server links it to your Azure MFA instance. The email address and password you need are obtained from the Azure Multi-Factor Auth Provider that was configured in Step 1: click Generate Activation Credentials on the Downloads page of the provider’s management page.

image

The credentials are valid for ten minutes, so yours will differ from mine. Enter them into the MFA Server configuration wizard and click Next.

MFA Server will attempt to reach Azure over TCP 443.
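
If activation fails, rule out connectivity first. A quick sketch with Test-NetConnection; the hostname here is an assumption for illustration, so substitute the endpoint your Azure MFA provider documentation gives you:

# Confirm outbound TCP 443 from the MFA server to Azure (hostname is a placeholder)
Test-NetConnection -ComputerName pfd.phonefactor.net -Port 443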

Select the group of servers that the configuration should replicate around. For example, if you were installing this software on each Exchange CAS server, you might enter “Exchange Servers” as the group name during the first install and then select that group during the install on the remaining servers. The configuration is shared amongst all servers with the same group name. If you already have a configuration with users in it and you set up a new group here, the users will get separate settings per group. For example, you might authenticate a VPN connection with a phone call but use the app for OWA logins; this would require two configurations and two groups of servers. If you want the same settings for all users in the entire company, configure a single group (the default group).

image

Next, choose whether you want to replicate your settings. If you have more than one MFA Server instance in the same group, select Yes.

Then choose what you want to authenticate. Here I have chosen OWA:

image

Then I need to choose the type of authentication I have in place. My OWA installation uses the default of forms-based authentication, but if you select Forms-based authentication here, the example URL for forms-based authentication shown on the next page is from Exchange Server 2003 (not 2007 or later). Therefore I select HTTP authentication instead.

Next I need to provide the URL to OWA, which I can get by browsing the OWA site over HTTPS. The MFA install will also use HTTPS, so you will need a certificate, trusted by a third party if you want to support user-managed devices. Letting users manage their own MFA settings (such as telephone numbers and form of authentication) reduces the support requirement, but it needs the User Portal, the SDK and the Mobile App webservice installed as well; these are outside the scope of this blog. Here I am going to use https://servername/owa.

image
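
Before committing the URL, it is worth confirming from the server itself that it responds over HTTPS. A minimal sketch, using the placeholder URL from above:

# Request the OWA page; with forms-based authentication this returns the logon page,
# and even an error reporting 401 still proves the URL is reachable over HTTPS
Invoke-WebRequest -Uri "https://servername/owa" -UseBasicParsing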

Finish the installation at this time and wait for the admin application to appear.

Step 3: Configure Users for MFA

Here we need to import the users who will be authenticated with MFA. Select the Users area and click Import from Active Directory. Browse the settings to import group members, OUs, or the results of a search, and add your own user account. Once you have it working for yourself, add the others. Users not listed here will see no change in their authentication method.

Ensure that your test user has a mobile number imported from Active Directory; if not, add one, choosing the correct country code as well. The default authentication for the user is a phone call to this number, which they must answer and press # on before they can be logged in. Ensure that the user is set to Enabled in the Users area of the management program.
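
The number can be checked and corrected in Active Directory before the import. A sketch using the ActiveDirectory module; the identity and number below are placeholders:

# Check the mobile number stored in AD for a user
Get-ADUser -Identity testuser -Properties MobilePhone | Select-Object Name, MobilePhone

# Set it if missing, including the country code
Set-ADUser -Identity testuser -MobilePhone '+44 7700 900123'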

Step 4: Configure OWA for MFA (additional steps)

On the IIS Authentication node you can adjust the default configuration for HTTP. Here you need to set Require Multi-Factor Authentication user match. This ensures that each authentication attempt is matched against the users list: if the user exists and is enabled, MFA runs for them; if the user exists but is disabled, the Succeed Authentication setting on the Advanced tab comes into play; if the user is not listed at all, authentication passes through without MFA.

image
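
That behaviour can be summarised as a small decision tree. This is a pseudocode sketch in PowerShell, not the product’s actual code, and the variable and function names are hypothetical:

# Hypothetical sketch of the user-match logic described above
if ($userIsInMfaUsersList) {
    if ($userIsEnabled) {
        Invoke-SecondFactor          # phone call, text or app verification
    } else {
        Apply-SucceedAuthSetting     # the Advanced tab setting decides pass or fail
    }
} else {
    Grant-PassThrough                # unlisted users authenticate without MFA
}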

Change to the Native Module tab and select OWA under Default Web Site only; do not enable the module on the backend website. Enable the native module on ECP under Default Web Site as well:

image
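
If you are unsure which OWA and ECP applications belong to which IIS website, appcmd will list them so you can be certain you are enabling the module on the frontend site only. A quick sketch from an elevated PowerShell prompt:

# List all IIS applications; enable the native module only on the "Default Web Site" entries
& "$env:windir\System32\inetsrv\appcmd.exe" list app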

Then I can attempt a login to OWA or ECP. Once I successfully authenticate, my phone rings and I am prompted to press #. Once I press #, I am allowed into Exchange!
