Saturday, September 28, 2013

Windows Azure and Cloud Computing Posts for 9/23/2013+

Top Stories This Week:

A compendium of Windows Azure, Service Bus, BizTalk Services, Access Control, Caching, SQL Azure Database, and other cloud-computing articles.

‡ Updated 9/28/2013 with new articles marked ‡.
• Updated 9/26/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Windows Azure Blob, Drive, Table, Queue, HDInsight and Media Services

‡ Tyler Doerksen (@tyler_gd) posted Quick Script: Copy All Blobs to New Storage Account Using PowerShell on 9/27/2013:

Here is a quick PowerShell script to copy blobs between storage accounts in different subscriptions.

Import-Module Azure

$sourceAccount = 'myvids'
$sourceKey = '@@@@@@@'

$destAccount = 'destvids'
$destKey = '@@@@@@@'

$containerName = 'videos'

$sourceContext = New-AzureStorageContext $sourceAccount $sourceKey
$destContext = New-AzureStorageContext $destAccount $destKey

$blobs = Get-AzureStorageBlob `
    -Context $sourceContext `
    -Container $containerName

$copiedBlobs = $blobs |
    Start-AzureStorageBlobCopy `
        -DestContext $destContext `
        -DestContainer $containerName `
        -Verbose 

$copiedBlobs | Get-AzureStorageBlobCopyState

That will copy all of the blobs in the source container to the destination container. Because the copies run asynchronously on the service side, the final Get-AzureStorageBlobCopyState call reports the progress of each copy operation.


Ricardo Villalobos (@ricvilla) posted Windows Azure Insider September 2013 – Hadoop and HDInsight: Big Data in Windows Azure on 9/25/2013:

For the September edition of the Windows Azure Insider MSDN magazine column, Bruno and I write about Big Data, the benefits of the MapReduce model, and HDInsight, the Windows Azure component that offers Hadoop-as-a-Service in the public cloud. We also show how to perform simple analytics against a public dataset using Java code and Hive.

The full article can be found here.

Enjoy!


Mariano Converti (@mconverti) reported New Windows Azure Media Services (WAMS) Asset Replicator release published on CodePlex in a 9/24/2013 post:

Last week, a new release of the Windows Azure Media Services (WAMS) Asset Replicator Tool was published on CodePlex. This September 2013 release includes the following changes:

  • Code upgraded to use the latest Windows Azure Storage Client library (v2.1.0.0)
  • Code upgraded to use the latest Windows Azure Media Services .NET SDK (v2.4.0.0)
  • C# projects upgraded to target .NET Framework 4.5
  • Cloud Service project upgraded to Windows Azure Tools v2.0
  • NuGet package dependencies updated to their latest versions
  • Support added to compare, replicate and verify FragBlob assets
  • New approach to auto-replicate and auto-ignore assets based on metadata in the IAsset.AlternateId property using JSON format

As you can see, the most important changes are the last two items, which I describe in more detail below.

FragBlob support

FragBlob is a new storage format that will be used in an upcoming Windows Azure Media Services feature that is not yet available. In this new format, each Smooth Streaming fragment is written to storage as a separate blob in the asset's container instead of grouping them together into Smooth Streaming PIFF files (ISMVs/ISMAs). Therefore, the Replicator has been updated to identify, compare, copy and verify this new FragBlob asset type.

FragBlob asset container

Replicator metadata in IAsset.AlternateId property using JSON format

To decide whether or not an asset should be automatically replicated or ignored, the Replicator Tool needs to get some metadata from your assets. Currently the IAsset interface does not have a property to store custom metadata, so as a workaround the Replicator now uses the IAsset.AlternateId string property to store this metadata with a specific JSON format described below:

{
   "alternateId":"my-custom-alternate-id",
   "replicate":"No",
   "data":"optional custom metadata"
} 

The following are the expected fields in the JSON format:

  • alternateId: this is the actual Alternate Id value for the asset that is used to identify and track assets in both data centers.
  • replicate: this is a three-state flag that the replicator will use to determine whether or not it should take automatic action for the asset. The possible values are:
    • No: the asset will be automatically ignored
    • Auto: the asset will be automatically replicated
    • Manual: no automatic action will be taken for this asset

    Important: If the replicate field is not included in the IAsset.AlternateId (or if this property is not set at all – null value), the default value is No (asset automatically ignored).

  • data: this is an optional field that you can use to store additional custom metadata for the asset.

The Replicator uses some extension methods for the IAsset interface to easily retrieve and set these values without having to deal with the JSON format. These extensions can be found in the Replicator.Core\Extensions\AssetUtilities.cs source code file.

IAsset.AlternateId extensions for Replicator JSON format

Using these IAsset extension methods for the IAsset.AlternateId property, you can easily set the Replicator metadata in your media workflows as explained below:

How to set the IAsset.AlternateId metadata for automatic replication

Once your asset is ready (for instance, after ingestion or a transcoding job) and you have successfully created an Origin locator for it, you need to set the alternateId and replicate fields as follows:

// Set the alternateId to track the asset in both WAMS accounts.
string alternateId = "my-custom-id";
asset.SetAlternateId(alternateId);
 
// Set the replicate flag to 'Auto' for automatic replication.
asset.SetReplicateFlag(ReplicateFlag.Auto);
 
// Update the asset to save the changes in the IAsset.AlternateId property.
asset.Update(); 

By setting the replicate field to 'Auto', the Replicator will un-ignore the asset and automatically start copying it to the other WAMS account. When the copy operation is complete, both assets will be marked as verified if everything measures up OK; otherwise, it will report the differences/errors and the user will have to take manual action from the Replicator Dashboard (like manually forcing the copy again).

How to set the IAsset.AlternateId metadata for manual replication

Once your asset is ready and you have successfully created an Origin locator for it, you need to set the alternateId and replicate fields as follows:

// Set the alternateId to track the asset in both WAMS accounts.
// Make sure to use the same alternateId for the asset in the other WAMS account that you want to compare.
string alternateId = "my-custom-id";
asset.SetAlternateId(alternateId);
 
// Set the replicate flag to 'Manual' for manual replication.
asset.SetReplicateFlag(ReplicateFlag.Manual);
 
// Update the asset to save the changes in the IAsset.AlternateId property.
asset.Update(); 

By setting the replicate field to 'Manual', the Replicator will un-ignore the asset and check if there is an asset in the other WAMS account with the same alternateId field. If the Replicator finds one, it will compare both assets and mark them as verified if everything checks out OK; otherwise, it will report the differences and the user will have to take manual action from the Replicator Dashboard (like deleting one and forcing a copy of the other). This scenario is useful when comparing assets living in different WAMS accounts and generated from the same source.


Michael Collier (@MichaelCollier) described a PowerShell script to compute the Billable Size of Windows Azure Blobs in a 9/23/2013 post:

I recently came across a PowerShell script that I think will be very handy for many Windows Azure users. The script calculates the billable size of Windows Azure blobs in a container, or the entire storage account. You can get the script at http://gallery.technet.microsoft.com/Get-Billable-Size-of-32175802.

Let’s walk through using this script:

0. Prerequisites

  • Windows Azure subscription. If you have MSDN, you can activate your Windows Azure benefits at http://bit.ly/140uAMt
  • Windows Azure storage account
  • Windows Azure PowerShell cmdlets (download and configure)


1. Select Your Windows Azure Subscription

Select-AzureSubscription -SubscriptionName "MySubscription"

2. Update PowerShell Execution Policy
You should only need to do this if your PowerShell execution policy prohibits running unsigned scripts. More on execution policy.

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

3. Calculate Blob Size for an Entire Storage Account

.\CalculateBlobCost.ps1 -StorageAccountName mystorageaccountname

VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.SqlDatabase.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.ServiceManagement.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.Storage.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.dll’.
VERBOSE: 12:16:39 PM – Begin Operation: Get-AzureStorageAccount
VERBOSE: 12:16:42 PM – Completed Operation: Get-AzureStorageAccount
VERBOSE: 12:16:42 PM – Begin Operation: Get-AzureStorageKey
VERBOSE: 12:16:45 PM – Completed Operation: Get-AzureStorageKey
VERBOSE: Container ‘deployments’ with 4 blobs has a size of 15.01MB.
VERBOSE: Container ‘guestbook’ with 4 blobs has a size of 0.00MB.
VERBOSE: Container ‘mydeployments’ with 1 blobs has a size of 12.55MB.
VERBOSE: Container ‘test123’ with 1 blobs has a size of 0.00MB.
VERBOSE: Container ‘vsdeploy’ with 0 blobs has a size of 0.00MB.
VERBOSE: Container ‘wad-control-container’ with 19 blobs has a size of 0.00MB.
VERBOSE: Container ‘wad-iis-logfiles’ with 15 blobs has a size of 0.01MB.
Total size calculated for 7 containers is 0.03GB.

4. Calculate Blob Size for a Specific Container within a Storage Account

.\CalculateBlobCost.ps1 -StorageAccountName mystorageaccountname `
-ContainerName deployments

VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.SqlDatabase.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.ServiceManagement.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.Storage.dll’.
VERBOSE: Loading module from path ‘C:\Program Files (x86)\Microsoft SDKs\Windows
Azure\PowerShell\Azure\.\Microsoft.WindowsAzure.Management.dll’.
VERBOSE: 12:12:48 PM – Begin Operation: Get-AzureStorageAccount
VERBOSE: 12:12:52 PM – Completed Operation: Get-AzureStorageAccount
VERBOSE: 12:12:52 PM – Begin Operation: Get-AzureStorageKey
VERBOSE: 12:12:54 PM – Completed Operation: Get-AzureStorageKey
VERBOSE: Container ‘deployments’ with 4 blobs has a size of 15.01MB.
Total size calculated for 1 containers is 0.01GB.

5. Calculate the Cost

The Windows Azure pricing calculator page should open immediately after the script executes. From there you can adjust the slider to the desired storage size, and view the standard price. The current price is $0.095 per GB for geo-redundant storage.  So this one storage account is costing me only $0.0027 per month. I can handle that.

<Return to section navigation list>


Windows Azure SQL Database, Federations and Reporting, Mobile Services

‡ David Pallman posted Getting Started with Mobility, Part 10: A Back-end in the Cloud with Windows Azure Mobile Services on 9/28/2013:

In this series of posts, we're looking at how to get started as a mobile developer. In Parts 1-9, we examined a variety of mobile platforms and client app development approaches (native, hybrid, web). We've seen a lot so far, but that's only the front-end; typically, a mobile app also needs a back-end. We'll now start looking at various approaches to providing a back-end for your mobile app. Here in Part 10 we'll look at Microsoft's cloud-based Mobile Backend As A Service (MBAAS) offering, Windows Azure Mobile Services.

About Mobile Back-end as a Service (MBaaS) Offerings
To create a back-end for your mobile app(s), you're typically going to care about the following:

  • A service layer, where you can put server-side logic
  • Persistent data storage
  • Authentication
  • Push notifications

You could create a back-end for the above using many different platforms and technologies, and you could do so in a traditional data center or in a public cloud. You'd need to write a set of web services, create a database or data store of some kind, provide a security mechanism, and so on.
What's interesting is that today you can make a "build or buy" decision about your mobile back-end: several vendors and open source groups have decided to offer all of the above as a ready-to-use, out-of-box service. Microsoft's Windows Azure Mobile Services is an example of this. Of course, it doesn't do all of your work for you--you're still going to be responsible for supplying a data model and server-side logic. Nevertheless, MBaaS gives you a huge head start. MBaaS is especially valuable if you are mostly a mobile developer who wants to focus their time on the app and not on a back-end implementation.

Windows Azure Mobile Services
Windows Azure Mobile Services "provides a scalable cloud backend for building Windows Store, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications. Store data in the cloud, authenticate users, and send push notifications to your application within minutes." Specifically, that means you get the following:

  • Authentication (to Facebook, Twitter, Microsoft, or Google accounts)
  • Scripting for server-side logic
  • Push notifications
  • Logging
  • Data storage
  • Diagnostics
  • Scalability

The service will also generate starter mobile clients for iOS, Android, Windows Phone, Windows 8, or HTML5. You can use these apps as your starting point, or as references for seeing how to hook up your own apps to connect to the service.

Pricing
So what does all this back-end goodness cost? At the time of this writing, there are Free, Standard ($25/month), and Premium ($199/month) tiers of pricing. You can read the pricing details here.

Training
We reference training resources throughout this post. A good place to start, though, is here:

Get Started with Mobile Services
Android | iOS | Windows Phone | Windows 8 | HTML5

Provisioning a Mobile Service
The first thing you'll notice about WAMS is the care that's been given to the developer experience, especially your first-time experience. Once you have a Windows Azure account, you'll go to azure.com, sign in to the management portal, and navigate to the Mobile Services tab. From there, you're only a handful of clicks away from rapid provisioning of a mobile back-end.

1 Kick-off Provisioning
Click New > Mobile Service > Create to begin provisioning a mobile service.

Provisioning a Mobile Service

2 Define a Unique Name and Select a Database
On the first provisioning screen, you'll choose an endpoint name for your service, and either create a database or attach to one you've previously created in the cloud. The service offers a free 20MB SQL database. You'll also indicate which data center to allocate the service in (there are 8 worldwide, 4 in the U.S.).

Provisioning a Mobile Service - Screen 1

Provisioning a Mobile Service - Screen 2

3 Wait for Provisioning to Complete
Click the Checkmark button, and provisioning will commence. It's fast! In less than a minute your service will have been created.

Newly-provisioned Mobile Service Listed in Portal

4 Use the New Mobile Service Wizard
Click on your service to set it up. You'll be greeted with a wizard that walks you through. This is especially helpful if this is your first time using Windows Azure Mobile Services. On the first screen, you'll indicate which mobile platform you are targeting: Windows Store (Windows 8), Windows Phone 8, iOS, Android, or HTML5 (don't worry, you're not restricted to a single mobile platform and can come back and change this setting as often as you wish).

Setup Wizard, Screen 1

5 Generate a Mobile App that Uses your Service
Next, you can download an automatically generated app for the platform you've selected, pre-wired up to talk to the service you just provisioned. To do so, click the Create a New App link. This will walk you through 1) installing the SDK you need for your mobile project, 2) creating a database table, and 3) downloading and running your app. The app and database will initially be for a ToDo database, but you can amend the database and app to your liking once you're done with the wizard.

Generating a Mobile Client App for Android

In the next section, we'll review how to build and run the app that you generated, and how to view what's happening on the back end.

Building and Running a Generated Mobile App
Let's walk through building and running the To Do app the portal auto-generates for you. The mobile client download is a zip file, which you should save locally, Unblock, and extract to a local folder. Next, you can open the project and run it--it's that simple to get started.

Running the App
When you run the app, you'll see a simple ToDo app--one that is live, and uses your back-end in the cloud for data storage. Run the app and kick the tires by adding some tasks. Enter a task by typing its name and clicking Add. Delete an item by touching its checkbox.

To Do app running on Android phone

Viewing the Data
Now, back in the Windows Azure portal we can inspect the data that has been stored in the database in the cloud. Click on the Data link at the top of the Windows Azure portal for your mobile service, and you'll see what's in the ToDo table. It should match what you just entered using the mobile app.

Database Data in the Cloud

Dynamic Data
One of the great features of Windows Azure Mobile Services is its ability to dynamically adjust its data model. This allows you to change your mobile app's data structure in code, and the back-end database will automatically add new columns if it needs to--all by itself.

Get Started with Data in Mobile Services
Android | iOS | Windows Phone | Windows 8 | HTML5
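
To illustrate, here's a minimal sketch using the Mobile Services HTML5/JavaScript client; the service URL, application key, and the new priority property are hypothetical placeholders, not part of the tutorial app:

// A minimal sketch of dynamic schema using the HTML5/JavaScript client.
// The service URL, application key, and 'priority' property are placeholders.
var client = new WindowsAzure.MobileServiceClient(
    'https://your-service.azure-mobile.net/',
    'YOUR-APPLICATION-KEY');
var todoTable = client.getTable('TodoItem');

// 'priority' is not in the original ToDo schema; with dynamic schema enabled,
// the back-end adds the new column automatically on insert.
todoTable.insert({ text: 'Buy milk', complete: false, priority: 2 }).then(
    function (item) { console.log('Inserted item ' + item.id); },
    function (error) { console.error('Insert failed: ' + error.message); });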

Dynamic data is a great feature, but you may want to disable it once you're ready for production use. You can enable or disable the feature on the Configure page of the portal.

Server-side Logic
Windows Azure Mobile Services happens to use Node.js, which means server-side logic is something you write in JavaScript.

Mobile Services Server Script Reference
You can have scripts associated with your database table(s), where operations like Insert, Update, Delete, or Read execute script code. You set these up on the Data page of the portal for your mobile service; a sketch of an insert script appears below.

Database Action Scripts
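
To make that concrete, here's a minimal sketch of an insert script; the (item, user, request) signature is the standard Mobile Services server script signature, while the validation rule itself is only an example:

function insert(item, user, request) {
    // Reject items with no text or overly long text.
    if (!item.text || item.text.length > 255) {
        request.respond(statusCodes.BAD_REQUEST,
            'Text is required and must be under 256 characters.');
        return;
    }
    // Stamp the record with the authenticated user, then save it.
    item.userId = user.userId;
    request.execute();
}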

You can also set up scheduled scripts, which run on a timer you define. On the Schedule page of the portal, click Create a Scheduled Job to define a scheduled job.

Creating a Scheduled Job

Once you've defined a scheduled job, you can access it in the portal to enter script code and enable or disable the job; a sketch of a job script appears below.

Setting a Job's Script Code
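
As a sketch, a job script is simply a function whose name matches the job name; the cleanupTodoItems job and its cleanup logic below are hypothetical:

function cleanupTodoItems() {
    var table = tables.getTable('TodoItem');
    // Read the completed items, then delete them one at a time.
    table.where({ complete: true }).read({
        success: function (results) {
            results.forEach(function (item) {
                table.del(item.id);
            });
            console.log('Removed ' + results.length + ' completed items.');
        }
    });
}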

Authentication
You can authenticate against Microsoft accounts, Facebook, Twitter, or Google. This involves registering your app for authentication and configuring Mobile Services; restricting database table permissions to authenticated users; and adding authentication to the app. A client-side sketch follows the links below.

Get Started with Authentication
Android | iOS | Windows Phone | Windows 8 | HTML5
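
In the HTML5/JavaScript client, for example, the login call is a one-liner; this sketch assumes client is a MobileServiceClient instance and that the chosen provider ('microsoftaccount', 'facebook', 'twitter', or 'google') has already been configured:

// Prompt the user to sign in with a Microsoft account.
client.login('microsoftaccount').then(
    function (user) { console.log('Signed in as ' + user.userId); },
    function (error) { console.error('Sign-in failed: ' + error); });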

Push Notifications
Push notification support is provided for each mobile platform. Tutorials acquaint you with the registration and code steps needed to implement push notifications for each platform; a server-side sketch follows the links below.

Get Started with Push Notifications
Android | iOS | Windows Phone | Windows 8 | HTML5
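
On the server side, a table script can push a WNS toast in a few lines; this sketch assumes a channelUri read from a Channel table and an item being inserted, as in the ToDo sample:

// push.wns is available to Mobile Services server scripts.
push.wns.sendToastText04(channelUri, {
    text1: 'New todo item',
    text2: item.text
}, {
    success: function (response) { console.log('Notification sent.'); },
    error: function (err) { console.error('Notification failed: ', err); }
});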

Summary
Windows Azure Mobile Services provides a fast and easy mobile back-end in the cloud. It offers the essential capabilities you need in a back end and supports the common mobile platforms. If you're comfortable expressing your server-side logic in node.js JavaScript, this is a compelling MBaaS to consider.

MBaaS is a new FLA (Five-Letter Acronym) AFAIK.


Bruno Terkaly (@brunoterkaly) produced a series of Windows Azure SQL Database (a.k.a. SQL Azure) hands-on labs (HOLs) on 9/26/2013. Here are the links:

A set of database-oriented posts
  1. How to create a Windows Azure Storage Account
  2. How to Export an On-Premises SQL Server Database to Windows Azure Storage
  3. How to Migrate an On-Premises SQL Server 2012 Database to Windows Azure SQL Virtual Machine
  4. How to Migrate an On-Premises SQL Server 2012 Database to Windows Azure SQL Database
  5. Setting up an Azure Virtual Machine For Developers with Visual Studio 2013 Ultimate and SQL Server 2012 Express



<Return to section navigation list>

Windows Azure Marketplace DataMarket, Cloud Numerics, Big Data and OData

Splunk (@splunk) now supports OData, according to a 9/27/2013 OData for Splunk announcement:

Ever wanted to be able to access your Splunk data from Excel or Tableau? This app provides an OData (http://www.odata.org) interface to your saved searches, which you can easily connect to with Excel, Tableau and a myriad of other programs.

This application is currently under private access, and works with Splunk 4.2x and above. If you would like access, please contact us at devinfo@splunk.com, or use the contact links on the site.

Release Notes

Version: 0.5.3

Now works with Excel 2013.


Brian Benz (@bbenz) reported OData v4.0 approved as Committee Specification by the OASIS Open Data Protocol Technical Committee on 9/13/2013 (missed when posted):

Microsoft Open Technologies, Inc. is pleased to announce the approval and publication of the OData Version 4.0 Committee Specification (CS) by the members of the OASIS Open Data Protocol (OData) Technical Committee. As we reported back in May, this brings OData 4.0 one step closer to becoming an OASIS Standard.

The Open Data Protocol (OData) uses REST-based data services to access and manipulate resources defined according to an Entity Data Model (EDM).

The Committee Specification is published in three parts. Part 1: Protocol defines the core semantics and facilities of the protocol. Part 2: URL Conventions defines a set of rules for constructing URLs to identify the data and metadata exposed by an OData service as well as a set of reserved URL query string operators. Part 3: Common Schema Definition Language (CSDL) defines an XML representation of the entity data model exposed by an OData service.

The CS also includes schemas, ABNF components, Vocabulary Components and the OData Metadata Service Entity Model.

You can also download a zip file of the complete package of each specification and related files here.

Join the OData Community

Here are some resources for those of you interested in using or implementing the OData protocol or contributing to the OData standard:

Our congratulations to the OASIS OData Technical Committee on achieving this milestone! As always, we’re looking forward to continued collaboration with the community to develop OData into a formal standard through OASIS.


<Return to section navigation list>

Windows Azure Service Bus, BizTalk Services and Workflow

‡ Paolo Salvatore (@babosbird) described How to integrate Mobile Services with BizTalk Server via Service Bus on 9/27/2013:

Introduction

This sample demonstrates how to integrate Windows Azure Mobile Services with line-of-business applications, running on-premises or in the cloud, via BizTalk Server 2013, Service Bus Brokered Messaging and Service Bus Relayed Messaging. The Access Control Service is used to authenticate Windows Azure Mobile Services against the Windows Azure Service Bus. In this scenario, BizTalk Server 2013 can run on-premises or in a Virtual Machine on Windows Azure.

Scenario

This scenario extends the TodoList tutorial application. For more information, see the following resources:

A mobile service receives a new todo item in JSON format sent by an HTML5/JavaScript site, Windows Phone 8 or Windows Store app via the HTTP POST method. The mobile service performs the following actions:

  • Authenticates the user against the Microsoft, Facebook, Twitter, or Google identity providers using the OAuth open protocol. For more information on this topic, see Troubleshooting authentication issues in Azure Mobile Services by Carlos Figueira.
  • Validates the input data.
  • Uses the access token issued by the identity provider as a key to query via REST the authentication provider and retrieve the user name. For more information on this topic, see Getting user information on Azure Mobile Services by Carlos Figueira.
  • Retrieves the user address from BizTalk Server 2013 via Service Bus Relayed Messaging.
  • Saves the new item to the TodoItem table on the Windows Azure SQL Database of the mobile service.
  • Reads notification channels from the Channel table on the Windows Azure SQL Database of the mobile service.
  • Sends push notifications to Windows Store Apps using the Windows Push Notification Service (WNS).
  • Sends push notifications to Windows Phone 8 Apps using the Microsoft Push Notification Service (MPNS).
  • Sends a notification to BizTalk Server 2013 via Service Bus Brokered Messaging.
Architecture

The following diagram shows the architecture of the solution.

Message Flow

  1. The client application (HTML5/JavaScript site, Windows Phone 8 app or Windows Store app) sends authentication credentials to the mobile service.
  2. The mobile service redirects the user to the page of the selected authentication provider, which validates the credentials (username and password) provided by the user and issues a security token.
  3. The mobile service returns its access token to the client application. The user sends a new todo item to the mobile service.
  4. The insert script for the TodoItem table handles the incoming call. The script validates the inbound data then invokes the authentication provider via REST using the request module (getUserName function).
  5. The script sends a request to the Access Control Service to acquire a security token necessary to be authenticated by the Service Bus Relay Service exposed by BizTalk Server via a WCF-BasicHttpRelay Receive Location. The mobile service uses the OAuth WRAP Protocol to acquire a security token from ACS (getAcsToken function); a minimal sketch of this request follows the list below. In particular, the server script sends a request to ACS using the https module. The request contains the following information:
    • wrap_name: the name of a service identity within the Access Control namespace of the Service Bus Relay Service (e.g. owner)
    • wrap_password: the password of the service identity specified by the wrap_name parameter.
    • wrap_scope: this parameter contains the relying party application realm. In our case, it contains the http base address of the Service Bus Relay Service (e.g. http://paolosalvatori.servicebus.windows.net/)
    ACS issues and returns a security token. For more information on the OAuth WRAP Protocol, see How to: Request a Token from ACS via the OAuth WRAP Protocol. The insert script calls the getUserAddress function that performs the following actions:
    • Extracts the wrap_access_token from the security token issued by ACS.
    • Creates a SOAP envelope to invoke the Service Bus Relay Service. In particular, the Header contains a RelayAccessToken element which in turn contains the wrap_access_token returned by ACS in base64 format. The Body contains the payload for the call.
    • Uses the request module to send the SOAP envelope to the Service Bus Relay Service. The Service Bus Relay Service validates and removes the security token, then forwards the request to BizTalk Server, which processes the request and returns a response message containing the user address. See below for more details on this use case.
  6. The insert script calls the insertItem function that inserts the new item in the TodoItem table.
  7. The insert script retrieves from the Channel table the channel URIs of the Windows Phone 8 and Windows Store apps to which to send a push notification (sendPushNotification function)
  8. The sendPushNotification function sends push notifications.
  9. The insert script calls the sendMessageToServiceBus function that uses the azure module to send a notification to BizTalk Server via a Windows Azure Service Bus queue.
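
    For reference, here's a minimal sketch of the kind of WRAP request the getAcsToken function makes using the https module; the namespace, service identity, and secret are placeholders, and the actual function in Paolo's sample may differ:

    var https = require('https');
    var querystring = require('querystring');

    function getAcsToken(callback) {
        var body = querystring.stringify({
            wrap_name: 'owner',
            wrap_password: 'YOUR-ISSUER-SECRET',
            wrap_scope: 'http://yournamespace.servicebus.windows.net/'
        });
        var options = {
            host: 'yournamespace-sb.accesscontrol.windows.net',
            path: '/WRAPv0.9/',
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
        };
        var req = https.request(options, function (res) {
            var data = '';
            res.on('data', function (chunk) { data += chunk; });
            res.on('end', function () {
                // The response body is form-encoded; extract the access token.
                callback(null, querystring.parse(data).wrap_access_token);
            });
        });
        req.on('error', callback);
        req.end(body);
    }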
    Call BizTalk Server via Service Bus Relayed Messaging

    The following diagram shows how BizTalk Server is configured to receive and process request messages sent by a mobile service via Service Bus Relay Service using a two-way request-reply message exchange pattern.

    Message Flow

    1. The client application sends a new item to the mobile service.
    2. The insert script sends a request to the Access Control Service to acquire a security token necessary to be authenticated by the Service Bus Relay Service exposed by BizTalk Server via a WCF-BasicHttpRelay Receive Location. The mobile service uses the OAuth WRAP Protocol to acquire a security token from ACS (getAcsToken function). In particular, the server script sends a request to ACS using the https module. The request contains the following information:
      • wrap_name: the name of a service identity within the Access Control namespace of the Service Bus Relay Service (e.g. owner)
      • wrap_password: the password of the service identity specified by the wrap_name parameter.
      • wrap_scope: this parameter contains the relying party application realm. In our case, it contains the http base address of the Service Bus Relay Service (e.g. http://paolosalvatori.servicebus.windows.net/).
    3. ACS issues and returns a security token. For more information on the OAuth WRAP Protocol, see How to: Request a Token from ACS via the OAuth WRAP Protocol.
    4. The insert script calls the getUserAddress function that performs the following actions:
      • Extracts the wrap_access_token from the security token issued by ACS.
      • Creates a SOAP envelope to invoke the Service Bus Relay Service. In particular, the Header contains a RelayAccessToken element which in turn contains the wrap_access_token returned by ACS in base64 format. The Body contains the payload for the call.
      • Uses the request module to send the SOAP envelope to the Service Bus Relay Service. The Service Bus Relay Service validates and removes the security token, then forwards the request to BizTalk Server, which processes the request and returns a response message containing the user address. See below for more details on this use case.
    5. The Service Bus Relay Service validates and removes the security token, then forwards the request to one of the WCF-BasicHttpRelay Receive Locations exposed by BizTalk Server.
    6. The WCF-BasicHttpRelay Receive Location publishes the request message to the BizTalkServerMsgBoxDb.
    7. The message triggers the execution of a new instance of the GetUserAddress orchestration. 
    8. The orchestration uses the user id contained in the request message to retrieve his/her address. For demo purposes, the orchestration generates a random address. The orchestration writes the response message to the BizTalkServerMsgBoxDb.
    9. The WCF-BasicHttpRelay Receive Location retrieves the message from the BizTalkServerMsgBoxDb.
    10. The receive location sends the response message back to the Service Bus Relay Service.
    11. The Service Bus Relay Service forwards the message to the mobile service.
    12. The mobile service saves the new item in the TodoItem table and sends the enriched item back to the client application.

    Call BizTalk Server via Service Bus Brokered Messaging

    The following diagram shows how BizTalk Server is configured to receive and process request messages sent by a mobile service via a Service Bus queue using a one-way message exchange pattern.

    Message Flow

    1. The client application sends a new item to the mobile service.
    2. The insert script calls the sendMessageToServiceBus function that performs the following actions (a minimal sketch of this function follows the list below):
      • Creates an XML message using the xmlbuilder module.
      • Uses the azure module to send the message to a Windows Azure Service Bus queue.
    3. BizTalk Server 2013 uses a SB-Messaging Receive Location to retrieve the message from the queue.
    4. The SB-Messaging Receive Location publishes the notification message to the BizTalkServerMsgBoxDb.
    5. The message triggers the execution of a new instance of the StoreTodoItem orchestration.
    6. The orchestration processes and transforms the incoming message and publishes a new message to the BizTalkServerMsgBoxDb.
    7. The message is consumed by a FILE Send Port.
    8. The FILE Send Port writes the message to the Out folder.
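
    A minimal sketch of such a function follows; the namespace, access key, and queue name are placeholders, and the ACS-style createServiceBusService constructor of the Node.js azure module is assumed:

    var azure = require('azure');

    function sendMessageToServiceBus(xmlPayload, callback) {
        // Placeholders: Service Bus namespace and ACS access key.
        var serviceBus = azure.createServiceBusService('yournamespace', 'YOUR-ACCESS-KEY');
        var message = { body: xmlPayload, contentType: 'application/xml' };
        // 'todoitemqueue' is a placeholder queue name.
        serviceBus.sendQueueMessage('todoitemqueue', message, function (error) {
            if (error) { console.error('Send failed: ' + error); }
            callback(error);
        });
    }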
    Prerequisites
    Building the Sample

    Proceed as follows to set up the solution.

    Create the Todo Mobile Service

    Follow the steps in the tutorial to create the Todo mobile service.

    1. Log into the Management Portal.
    2. At the bottom of the navigation pane, click +NEW.

    3. Expand Mobile Service, then click Create.

      This displays the New Mobile Service dialog.

    4. In the Create a mobile service page, type a subdomain name for the new mobile service in the URL textbox and wait for name verification. Once name verification completes, click the right arrow button to go to the next page.

      This displays the Specify database settings page.

      Note: As part of this tutorial, you create a new SQL Database instance and server. You can reuse this new database and administer it as you would any other SQL Database instance. If you already have a database in the same region as the new mobile service, you can instead choose Use existing Database and then select that database. The use of a database in a different region is not recommended because of additional bandwidth costs and higher latencies.

    5. In Name, type the name of the new database, then type Login name, which is the administrator login name for the new SQL Database server, type and confirm the password, and click the check button to complete the process.

    Configure the application to authenticate users

    This solution requires the user to be authenticated by an identity provider. Follow the instructions contained in the links below to configure the Mobile Service to authenticate users against one or more identity providers and follow the steps to register your app with each provider:

    For more information, see:

    For your convenience, here are the steps to configure the application to authenticate users using a Microsoft Account login.

    1. Log on to the Windows Azure Management Portal, click Mobile Services, and then click your mobile service.

    2. Click the Dashboard tab and make a note of the Site URL value.

    3. Navigate to the My Applications page in the Live Connect Developer Center, and log on with your Microsoft account, if required.

    4. Click Create application, then type an Application name and click I accept.

      This registers the application with Live Connect.

    5. Click Application settings page, then API Settings and make a note of the values of the Client ID and Client secret.

      Security Note

      The client secret is an important security credential. Do not share the client secret with anyone or distribute it with your app.

    6. In Redirect domain, enter the URL of your mobile service, and then click Save.

    7. Back in the Management Portal, click the Identity tab, enter the Client ID and Client Secret obtained in the previous step in the Microsoft account settings, and click Save.

    Restrict permissions to authenticated users
    1. In the Management Portal, click the Data tab, and then click the TodoItem table.

    2. Click the Permissions tab, set all permissions to Only authenticated users, and then click Save. This will ensure that all operations against the TodoItem table require an authenticated user. This also simplifies the scripts in the next tutorial because they will not have to allow for the possibility of anonymous users.

    Define server side scripts

    Server scripts are registered in a mobile service and can be used to perform a wide range of operations on data being inserted and updated, including validation and data modification. In this sample, they are used to validate data, retrieve data from identity providers, send push notifications and communicate with BizTalk Server via Windows Azure Service Bus. For more information on server scripts, see the following resources:

    To use Windows Azure Service Bus, you need to use the Node.js azure package in server scripts. This package includes a set of convenience libraries that communicate with the storage REST services. For more information on the Node.js azure package, see the following resources:

    Follow these steps to create server scripts:

    1. In the Management Portal, click the Data tab, and then click the TodoItem table.
    2. Click the scripts tab and select the insert, update, read or delete script from the drop-down list.
    3. Modify the code of the selected script to add your business logic to the function. …

    Paolo continues with source code for the server scripts.

    Configure Git Source Control

    The source control support provides a Git repository as part of your mobile service, and it includes all of your existing Mobile Service scripts and permissions. You can clone that Git repository on your local machine, make changes to any of your scripts, and then easily deploy the mobile service to production using Git. This enables a really great developer workflow that works on any developer machine (Windows, Mac and Linux). To configure Git source control, proceed as follows:

    1. Navigate to the dashboard for your mobile service and select the Set up source control link:
    2. If this is your first time enabling Git within Windows Azure, you will be prompted to enter the credentials you want to use to access the repository:
    3. Once you configure this, you can switch to the CONFIGURE tab of your Mobile Service and you will see a Git URL you can use to clone your repository:

    You can use the Git URL to clone the repository locally using Git from the command line:

    cd C:\  
    mkdir Git  
    cd Git  
    git clone https://todolist.scm.azure-mobile.net/todolist.git

    You can make changes to the code of server scripts and then upload the changes to your mobile service using the following commands.

    git add -A . 
    git commit -m "Modified calculator server script" 
    git push
    Visual Studio Solution

    The Visual Studio solution includes the following projects:

    • BusinessLogic: contains helper classes used by BizTalk Server orchestrations.

    • Orchestrations: contains two orchestrations:

      • GetUserAddress
      • StoreTodoItem
    • Schemas: contains XML schemas for the messages exchanged by BizTalk Server with the Mobile Service via Windows Azure Service Bus.
    • Maps: contains the maps used by the BizTalk Server application.

    • HTML5: contains the HTML5/JavaScript client for the mobile service.

    • WindowsPhone8: contains the Windows Phone 8 app that can be used to test the mobile service.

    • WindowsStoreApp: contains the Windows Store app that can be used to test the mobile service.

    NOTE: the WindowsStoreApp project uses the Windows Azure Mobile Services NuGet package. To reduce the size of the zip file, I deleted some of the assemblies from the packages folder. To repair the solution, make sure to right-click the solution and select Enable NuGet Package Restore as shown in the picture below. For more information on this topic, see the following post.

    BizTalk Server Application

    Proceed as follows to create the TodoItem BizTalk Server application:

    • Open the solution in Visual Studio 2012 and deploy the Schemas, Maps and Orchestration to create the TodoItem application.
    • Open the Binding.xml file in the Setup folder and replace the [YOUR-SERVICE-BUS-NAMESPACE] placeholder with the name of your Windows Azure Service Bus namespace.
    • Open the BizTalk Server Administration Console and import the binding file to create Receive Ports, Receive Locations and Send Ports.
    • Open the WCF-BasicHttpRelay Receive Location, click the Configure button:
    • Click the Edit button in the Access Control Service section under the Security tab.
    • Define the ACS STS URI, Issuer Name and Issuer Secret:
    • Open the SB-Messaging Receive Location, click the Configure button:
    • Define the ACS STS URI, Issuer Name and Issuer Secret under the Authentication tab:
    • Open the FILE Send Port and click the Configure button:
    • Enter the path of the Destination folder where notification messages sent by the Mobile Service via Service Bus are stored:
    HTML5/JavaScript Client

    The following figure shows the HTML5/JavaScript application that you can use to test the mobile service.

    Paolo continues with source code.

    Conclusions

    Mobile services can easily be extended to take advantage of the services provided by Windows Azure. This sample demonstrates how to integrate Windows Azure Mobile Services with line-of-business applications, running on-premises or in the cloud, via BizTalk Server 2013, Service Bus Brokered Messaging and Service Bus Relayed Messaging. The Access Control Service is used to authenticate Windows Azure Mobile Services against the Windows Azure Service Bus. In this scenario, BizTalk Server 2013 can run on-premises or in a Virtual Machine on Windows Azure. See also the following articles on Windows Azure Mobile Services:


    Clemens Vasters (@clemensv) posted Blocking outbound IP addresses. Again. No. on 9/28/2013:

    Just replied yet again to someone whose customer thinks they're adding security by blocking outbound network traffic to cloud services using IP-based allow-lists. They don't.

    Service Bus and many other cloud services are multitenant systems that are shared across a range of customers. The IP addresses we assign come from a pool, and that pool shifts as we optimize traffic from and to datacenters. We may also move clusters between datacenters within one region for disaster recovery, should that be necessary. Another reason we cannot give every feature slice its own IP address is that the world has none left: we're out of IPv4 address space, which means we must pool workloads.

    The last points are important ones and also show how antiquated the IP-address lockdown model is relative to current practices for datacenter operations. Because of the IPv4 shortage, pools get acquired and traded and change. Because of automated and semi-automated disaster recovery mechanisms, we can provide service continuity even if clusters or datacenter segments or even datacenters fail, but a client system that's locked to a single IP address will not be able to benefit from that. As the cloud system packs up and moves to a different place, the client stands in the dark due to its firewall rules. The same applies to rolling updates, which we perform using DNS switches.

    The state of the art of no-downtime datacenter operations is that workloads are agile and will move as required. The place where you have stability is DNS.

    Outbound Internet IP lockdowns add nothing in terms of security because workloads increasingly move into multitenant systems or systems that are dynamically managed as I’ve illustrated above. As there is no warning, the rule may be correct right now and pointing to a foreign system the next moment. The firewall will not be able to tell. The only proper way to ensure security is by making the remote system prove that it is the system you want to talk to and that happens at the transport security layer. If the system can present the expected certificate during the handshake, the traffic is legitimate. The IP address per-se proves nothing. Also, IP addresses can be spoofed and malicious routers can redirect the traffic. The firewall won’t be able to tell.

    With most cloud-based services, traffic runs via TLS. You can verify the thumbprint of the certificate against the cert you can either set yourself, or obtain from the vendor out-of-band, or acquire by hitting a documented endpoint (in Windows Azure Service Bus, it's the root of each namespace). With our messaging system in ServiceBus, you are furthermore encouraged to use any kind of cryptographic mechanism to protect payloads (message bodies). We do not evaluate those for any purpose. We evaluate headers and message properties for routing. Neither of those are logged beyond having them in the system for temporary storage in the broker.
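
    A minimal Node.js sketch of that thumbprint check might look like the following; the namespace and expected fingerprint are placeholders you would obtain out-of-band as described above, and the sketch is illustrative rather than taken from Clemens's post:

    var https = require('https');

    var EXPECTED_FINGERPRINT = 'AB:CD:EF:...'; // placeholder thumbprint

    var req = https.request({
        host: 'yournamespace.servicebus.windows.net',
        path: '/'
    }, function (res) {
        // Inspect the certificate presented during the TLS handshake.
        var cert = res.socket.getPeerCertificate();
        if (cert.fingerprint !== EXPECTED_FINGERPRINT) {
            console.error('Unexpected certificate: ' + cert.fingerprint);
        } else {
            console.log('Certificate matches the expected thumbprint.');
        }
    });
    req.end();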

    The server having access to Service Bus should have outbound Internet access based on the server's identity or the running process's identity. This can be achieved using IPSec between the edge and the internal system. Constraining it to the Microsoft DC ranges is possible, but those ranges shift and expand without warning.

    The bottom line here is that there is no way to make outbound IP address constraints work with cloud systems or high availability systems in general.



    <Return to section navigation list>

    Windows Azure Access Control, Active Directory, Identity and Workflow

    • Steven Martin (@stevemar_msft) posted Announcing General Availability of Windows Azure Multi-Factor Authentication to the Windows Azure Team blog on 9/26/2013:

    Identity and access management is an anchor for security and top of mind for enterprise IT departments. It is key to extending anytime, anywhere access to employees, partners, and customers. Today, we are pleased to announce the General Availability of Windows Azure Multi-Factor Authentication - delivering increased access security and convenience for IT and end users.

    Multi-Factor Authentication quickly enables an additional layer of security for users signing in from around the globe. In addition to a username and password, users may authenticate via:

    1. An application on their mobile device.
    2. Automated voice call.
    3. Text message with a passcode.

    It’s easy and meets user demand for a simple sign-in experience.

    Windows Azure Multi-Factor Authentication can be configured in minutes for the many applications that require additional security, including:

      • On-Premises VPNs, Web Applications, and More -- Run the Multi-Factor Authentication Server on your existing hardware or in a Windows Azure Virtual Machine. Synchronize with your Windows Server Active Directory for automated user set up.
      • Cloud Applications like Windows Azure, Office 365, and Dynamics CRM -- Enable Multi-Factor Authentication for Windows Azure AD identities with the flip of a switch, and users will be prompted to set up multi-factor authentication the next time they sign in.
      • Custom Applications -- Use our SDK to build Multi-Factor Authentication phone call and text message authentication into your application’s sign-in or transaction processes.

    Windows Azure Multi-Factor Authentication offers two pricing options: $2 per user per month or $2 for 10 authentications. Visit the pricing page to learn more.

    For details on enabling Windows Azure Multi-Factor Authentication, please visit Scott Guthrie’s blog and check out this video. To get started with the new Multi-Factor Authentication service, visit the Windows Azure Management Portal and let us know what you think at @WindowsAzure


    <Return to section navigation list>

    Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

    • Scott Guthrie (@scottgu) reported release of Windows Azure: New Virtual Machine, Active Directory, Multi-Factor Auth, Storage, Web Site and Spending Limit Improvements in a 9/26/2013 post:

    This week we released some great updates to Windows Azure.  These new capabilities include:

    • Compute: New 2-CPU Core 14 GB RAM instance option
    • Virtual Machines: Support for Oracle Software Images, Management Operations on Stopped VMs
    • Active Directory: Richer Directory Management and General Availability of Multi-Factor Authentication Support
    • Spending Limit: Reset your Spending Limit, Virtual Machines are no longer deleted if it is hit
    • Storage: New Storage Client Library 2.1 Released
    • Web Sites: IP and Domain Restriction Now Supported

    All of these improvements are now available to use immediately.  Below are more details about them.

    Compute: New 2-CPU Core 14 GB RAM instance

    This week we released a new memory-intensive instance for Windows Azure. This new instance, called A5, has two CPU cores and 14 gigabytes (GB) of RAM and can be used with Virtual Machines (both Windows and Linux) and Cloud Services:


    You can begin using this new A5 compute option immediately.  Additional information on pricing can be found in the Cloud Services and Virtual Machines sections of our pricing details pages on the Windows Azure website.

    Virtual Machines: Support for Oracle Software Images

    Earlier this summer we announced a strategic partnership between Microsoft and Oracle, and that we would enable support for running Oracle software in Windows Azure Virtual Machines.

    Starting today, you can now deploy pre-configured virtual machine images running various combinations of Oracle Database, Oracle WebLogic Server, and Java Platform SE on Windows, with licenses for the Oracle software included.  These ready-to-deploy Oracle software images enable rapid provisioning of cost-effective cloud environments for development, testing, deployment, and easy scaling of enterprise applications.  The images can now be easily selected in the standard “Create Virtual Machine” wizard within the Windows Azure Management Portal:


    During preview, these images are offered for no additional charge on top of the standard Windows Server VM rate.  After the preview period ends, these Oracle images will be billed based on the total number of minutes the VMs run in a month.  With Oracle license mobility, existing Oracle customers that are already licensed on Oracle software also have the flexibility to deploy them on Windows Azure. 

    To learn more about Oracle on Windows Azure, visit windowsazure.com/oracle and read the technical walk-through documentation for the Oracle Images.

    Virtual Machines: Management Operations on Stopped VMs

    Starting with this week’s release, it is now possible to perform management operations on stopped/de-allocated Virtual Machines.  Previously a VM had to be running in order to do operations like change the VM size, attach and detach disks, configure endpoints and load balancer/availability settings.  Now it is possible to do all of these on stopped VMs without having to boot them:


    Active Directory: Create and Manage Multiple Active Directories

    Starting with this week’s release it is now possible to create and manage multiple Windows Azure Active Directories in a single Windows Azure subscription (previously only one directory was supported and once created you couldn’t delete it).  This is useful both for development/test scenarios as well as for cases where you want to have separate directory tenants or synchronize with different on-premises domains or forests. 

    Creating a New Active Directory

    Creating a new Active Directory is now really easy.  Simply select New->Application Services->Active Directory->Directory within the management portal:


    When prompted configure the directory name, default domain name (you can later change this to any custom domain you want – e.g. yourcompanyname.com), and the country or region to use:


    In a few seconds you’ll have a new Active Directory hosted within Windows Azure that is ready to use for free:


    You can run and manage your Windows Azure Active Directories entirely in the cloud, or alternatively sync them with an on-premises Active Directory deployment - which allows you to automatically synchronize all of your on-premises users into your Active Directory in the cloud.  This latter option is very powerful, and ensures that any time you add or remove a user in your on-premises directory it is automatically reflected in the cloud as well.

    You can use your Windows Azure Active Directory to manage identity access to custom applications you run and host in the cloud (and there is new support within ASP.NET in the VS 2013 release that makes building these SSO apps on Windows Azure really easy).  You can also use Windows Azure Active Directory to securely manage the identity access of cloud based applications like Office 365, SalesForce.com, and other popular SaaS solutions.

    Additional New Features

    In addition to enabling the ability to create multiple directories in a single Windows Azure subscription, this week’s release also includes several additional usability enhancements to the Windows Azure Active Directory management experience:

    • With this week's release, we have added the ability to change the name of a directory after it's created (previously it was fixed at creation time).
    • As an administrator of a directory, you can now add users from another directory of which you’re a member. This is useful, for example, in the scenario where there are other members of your production directory who will need to collaborate on an application that is under development or testing in a non-production environment. A user can be a member of up to 20 directories.
    • If you use a Microsoft account to access Windows Azure, and you use a different organizational account to manage another directory, you may find it convenient to manage that second directory with your Microsoft account. With this release, we’ve made it easier to configure a Microsoft account to manage an existing Active Directory. Now you can configure this even if the Microsoft account already manages a directory, and even if the administrator account for the other directory doesn’t have a subscription to Windows Azure. This is a common scenario when the administrator account for the other directory was created during signup for Office 365 or another Microsoft service.
    • In this release, we’ve also added support to enable developers to delete single tenant applications that they’ve added to their Windows Azure AD. To delete an application, open the directory in which the application was added, click on the Applications tab, and click Delete on the command bar. An application can be deleted only when External Access is set to ‘Off’ on the configure tab.

    As always, if there are aspects of these new Azure AD experiences that you think are great, or things that drive you crazy, let us know by posting in our forum on TechNet.

    Active Directory: General Availability of Multi-Factor Authentication Service

    With this week’s release we are excited to ship the general availability release of a great new service: the Windows Azure Multi-Factor Authentication (MFA) Service.  Windows Azure Multi-Factor Authentication is a managed service that makes it easy to securely manage user access to Windows Azure, Office 365, Intune, Dynamics CRM and any third party cloud service that supports Windows Azure Active Directory.  You can also use it to securely control access to your own custom applications that you develop and host within the cloud.

    Windows Azure Multi-Factor Authentication can also be used with on-premise scenarios. You can optionally download our new Multi-Factor Authentication Server for Windows Server Active Directory and use it to protect on-premise applications as well.

    Getting Started

To enable multi-factor authentication, sign in to the Windows Azure Management Portal and select New->Application Services->Active Directory->Multi-Factor Auth Provider and choose the “Quick Create” option.  When you create the service you can point it at your Windows Azure Active Directory and choose from one of two billing models (per-user pricing, or per-authentication pricing):

    image

Once created, the Windows Azure Multi-Factor Authentication service will show up within the “Multi-Factor Auth Providers” section of the Active Directory extension:

    image

You can then manage which users in your directory have multi-factor authentication enabled by drilling into the “Users” tab of your Active Directory and then clicking the “Manage Multi-Factor Auth” button:

    image

Once multi-factor authentication is enabled for a user within your directory, they will be able to use a variety of secondary authentication techniques, including verification via a mobile app, phone call, or text message, to provide additional verification when they log in to an app or service.  The management and tracking of this is handled automatically for you by the Windows Azure Multi-Factor Authentication Service.

    Learn More

    You can learn more about today’s release from this 6 minute video on Windows Azure Multi-Factor Authentication. 

    Here are some additional videos and tutorials to learn even more:

    Start making your applications and systems more secure with multi-factor authentication today!  And give us your feedback and feature requests via the MFA forum.

    Billing: Reset your Spending Limit on MSDN subscriptions

When you sign up for Windows Azure as an MSDN customer you automatically get an MSDN subscription created for you that enables deeply discounted prices and free “MSDN credits” (up to $150 each month) that you can spend on any resources within Windows Azure.  I blogged some details about this last week.

    By default MSDN subscriptions in Windows Azure are created with what is called a “Spending Limit” which ensures that if you ever use up all of the MSDN credits you still don’t get billed – as the subscription will automatically suspend when all of the free credits are gone (ensuring your bill is never more than $0).

    You can optionally remove the spending limit if you want to use more than the free credits and pay any overage on top of them.  Prior to this week, though, once the spending limit was removed there was no way to re-instate it for the next billing cycle.

    Starting with this week’s release you can now:

• Remove the spending limit only for the current billing cycle (ideal if you know that it is a one-time spike)
• Remove the spending limit indefinitely if you expect to continue to have higher usage in the future
• Reset/Turn the spending limit back on from the next billing cycle forward in case you’ve already turned it off

To enable or reset your spending limit, click the “Subscription” button at the top of the Windows Azure Management Portal and then click the “Manage your subscriptions” link within it:

    image

This will take you to the Windows Azure subscription management page (which lists all of the Windows Azure subscriptions you have active).  Click your MSDN subscription to see details on your account – including usage data on how much of the services you’ve used on it:

    image

    Above you can see usage data on my personal MSDN subscription.  I’ve done a lot of talks recently and have used up my free $150 credits for the month and have $23.64 in overages.  I was able to go above $0 on the subscription because I’ve turned off my spending limit (this is indicated in the text I’ve highlighted in red above).

If I want to reapply the spending limit for the next billing cycle (which starts on October 3rd) I can now do so by clicking the “Click here to change the spending limit option” link.  This will bring up a dialog that makes it really easy for me to re-activate the spending limit starting with the next billing cycle:

    image

    We hope this new flexibility to turn the spending limit on and off enables you to use your MSDN benefits even more, and provides you with confidence that you won’t inadvertently do something that causes you to have to pay for something you weren’t expecting to.

    Billing: Subscription suspension no longer deletes Virtual Machines

    In addition to supporting the re-enablement of the spending limit, we also made an improvement this week so that if your MSDN (or BizSpark or Free trial) subscription does trigger the spending limit we no longer delete the Virtual Machines you have running.

Previously, Virtual Machines deployed in suspended subscriptions would be deleted when the spending limit was passed (the data drives would be preserved – but the VM instances themselves would be deleted). Now when a subscription is disabled, VMs deployed inside it will simply move into the Stopped (Deallocated) state we recently introduced (which allows a VM to stop without incurring any billing).

    This allows the Virtual Machines to be quickly restarted with all the previously attached disks and endpoints when a fresh monetary credit is applied or the subscription is converted into a paid subscription. As a result, customers don’t have to worry about losing their Virtual Machines when spending limits are reached, and they can quickly return back to business by re-starting their VMs immediately.

    Storage: New .NET Storage Client Library 2.1 Release

    Earlier this month we released a major update of our Windows Azure Storage Client Library for .NET.  The new 2.1 release includes a ton of awesome new features and capabilities:

    • Improved Performance
    • Async Task<T> support
• IQueryable<T> Support for Tables
    • Buffer Pooling Support
    • .NET Tracing Integration
    • Blob Stream Improvements
    • And a lot more…

    Read this detailed blog post about the Storage Client Library 2.1 Release from the Windows Azure Storage Team to learn more.  You can install the Storage Client Library 2.1 release and start using it immediately using NuGet.
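For context, here’s a minimal C# sketch (not from the storage team’s post) showing the new Task-based async methods in the 2.1 library; the connection string, container name, and blob name are placeholders:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class StorageAsyncSample
    {
        // Uploads a small blob using the Task-returning APIs added in 2.1,
        // which replace the older Begin/End asynchronous pattern.
        static async Task UploadAsync()
        {
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
            CloudBlobClient client = account.CreateCloudBlobClient();
            CloudBlobContainer container = client.GetContainerReference("samples");
            await container.CreateIfNotExistsAsync();

            CloudBlockBlob blob = container.GetBlockBlobReference("hello.txt");
            using (var stream = new MemoryStream(new byte[] { 0x48, 0x69 }))
            {
                await blob.UploadFromStreamAsync(stream);
            }
        }
    }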

    Web Sites: IP and Domain Restriction Now Supported

    This month we have also enabled the IP and Domain Restrictions feature of IIS to be used with Windows Azure Web Sites. This provides an additional security option that can also be used in combination with the recently enabled dynamic IP address restriction (DIPR) feature (http://blogs.msdn.com/b/windowsazure/archive/2013/08/27/confirming-dynamic-ip-address-restrictions-in-windows-azure-web-sites.aspx).

    Developers can use IP and Domain Restrictions to control the set of IP addresses, and address ranges, that are either allowed or denied access to their websites. With Windows Azure Web Sites developers can enable/disable the feature, as well as customize its behavior, using web.config files located in their website.

    There is an overview of the IP and Domain Restrictions feature from IIS available here:  http://www.iis.net/configreference/system.webserver/security/ipsecurity. A full description of individual configuration elements and attributes is available here: http://msdn.microsoft.com/en-us/library/ms691353(v=vs.90).aspx

    The example configuration snippet below shows an ipSecurity configuration that only allows access to addresses originating from the range specified by the combination of the ipAddress and subnetMask attributes. Setting allowUnlisted to false means that only those individual addresses, or address ranges, explicitly specified by a developer will be allowed to make HTTP requests to the website. Setting the allowed attribute to true in the child add element indicates that the address and subnet together define an address range that is allowed to access the website.

<system.webServer>
  <security>
    <ipSecurity allowUnlisted="false" denyAction="NotFound">
      <add allowed="true" ipAddress="123.45.0.0" subnetMask="255.255.0.0"/>
    </ipSecurity>
  </security>
</system.webServer>

    If a request is made to a website from an address outside of the allowed IP address range, then an HTTP 404 not found error is returned as defined in the denyAction attribute.

One final note: just like the companion DIPR feature, Windows Azure Web Sites ensures that the client IP addresses “seen” by the IP and Domain Restrictions module are the actual IP addresses of the Internet clients making HTTP requests.

    Summary

    Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.


    • Philip Fu posted [Sample Of Sep 25th] How to control Windows Azure VM with the REST API to the Microsoft All-In-One Code Framework blog on 9/25/2013:

    imageSample Download :  http://code.msdn.microsoft.com/How-to-program-control-838bd90b

Azure PowerShell isn’t the only way to operate a Windows Azure IaaS virtual machine. We can also use the Service Management REST API to achieve the same result.

    This sample will use GET/POST/DELETE requests to operate the virtual machine.
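To give a flavor of the approach, here is a hedged C# sketch (not the sample’s actual code; the subscription ID, cloud service name, and certificate thumbprint are placeholders) of a GET request against the Service Management REST API using a management certificate:

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class VmRestGet
    {
        static void Main()
        {
            // The management certificate must already be uploaded to the subscription.
            var uri = string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
                "<subscription-id>", "<cloud-service>", "production");

            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "GET";
            request.Headers.Add("x-ms-version", "2013-06-01"); // required API version header

            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            var certs = store.Certificates.Find(
                X509FindType.FindByThumbprint, "<thumbprint>", false);
            request.ClientCertificates.Add(certs[0]);
            store.Close();

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd()); // XML describing the deployment
            }
        }
    }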

imageYou can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


    Steven Martin (@stevemar_msft) announced that you can Deploy Pre-configured Oracle VMs on Windows Azure on 9/23/2013:

imageBuilding on the recent announcement of the strategic partnership between Microsoft and Oracle, today we are making a number of popular Oracle software configurations available through the Windows Azure image gallery.  Effective immediately, customers can deploy pre-configured virtual machine images running various combinations of Oracle Database, Oracle WebLogic Server, and Java Platform SE on Windows Server, with licenses for the Oracle software included. During the preview, these images are offered for no additional charge beyond the regular compute costs. After the preview period ends, Oracle images will be billed based on the total number of minutes VMs run in a month; details on the VM pricing will be announced at a later date.

    imageThese ready-to-deploy images enable rapid provisioning of cost-effective cloud environments for development and testing as well as easy scaling of enterprise Oracle applications. With Oracle license mobility, existing customers who are licensed on Oracle software can now deploy on Windows Azure and take advantage of powerful management features, cross-platform tools and automation capabilities.

Additionally, Oracle now offers Oracle Linux, Oracle Linux with Oracle Database, and Oracle Linux with WebLogic Server in the Windows Azure image gallery for customers who are licensed to use these products.

    To get started with Oracle on Windows Azure, visit www.windowsazure.com/oracle and the technical walk-through documentation for Oracle Images. Don’t forget to tell us what you think at @WindowsAzure!


    The Windows Azure Team posted Pricing Details for Oracle Software VMs on 9/23/2013:

    image


    The Windows Azure Technical Support (WATS) Team answered Why did my Azure VM restart? on 9/23/2013:

    imageAn unexpected restart of an Azure VM is an issue that commonly results in a customer opening a support incident to determine the cause of the restart. Hopefully the explanation below provides details to help understand why an Azure VM could have been restarted.

Windows Azure updates the host environment approximately once every 2-3 months to keep the environment secure for all applications and virtual machines running on the platform. This update process may result in your VM restarting, causing downtime to your applications/services hosted by the Virtual Machines feature. There is no option or configuration to avoid these host updates. In addition to platform updates, Windows Azure service healing occurs automatically when a problem with a host server is detected and the VMs running on that server are moved to a different host. When this occurs, you lose connectivity to the VM during the service healing process. After the service healing process is completed, when you connect to the VM, you will likely find an event log entry indicating a VM restart (either graceful or unexpected). Because of this, it is important to configure your VMs to handle these situations in order to avoid downtime for your applications/services.

    imageTo ensure high availability of your applications/services hosted in Windows Azure Virtual Machines,  we recommend using multiple VMs with availability sets. VMs in the same availability set are placed in different fault domains and update domains so that planned updates, or unexpected failures, will not impact all the VMs in that availability set. For example, if you have two VMs and configure them to be part of an availability set, when a host is being updated, only one VM is brought down at a time. This will provide high availability since you have one VM available to serve the user requests during the host update process. Mark Russinovich has posted a great blog post which explains Windows Azure Host updates in detail. Managing the high availability is detailed here.

    While availability sets help provide high availability for your VMs, we recognize that proactive notification of planned maintenance is a much-requested feature, particularly to help prepare in a situation where you have a workload that is running on a single VM and is not configured for high availability. While this type of proactive notification of planned maintenance is not currently provided, we encourage you to provide comments on this topic so we can take the feedback to the product teams.


    Cory Fowler (@SyntaxC4) posted Important: Update to Default PHP Runtime on Windows Azure Web Sites on 9/24/2013:

    imageIn upcoming weeks Windows Azure Web Sites will update the default PHP version from PHP 5.3 to PHP 5.4. PHP 5.3 will continue to be available as a non-default option. Customers who have not explicitly selected a PHP version for their site and wish the site to continue to run using PHP 5.3 can select this version at any time from the Windows Azure Management Portal, Windows Azure Cross Platform Command Line Tools, or Windows Azure PowerShell Cmdlets. The Windows Azure Web Sites team will also start onboarding PHP 5.5 as an option in the near future.

    Explicitly Selecting a PHP version in Windows Azure Web Sites

    imageIf you wish to continue to run PHP 5.3 in your Windows Azure Web Site, follow one of the options below to explicitly set the PHP runtime of your site.

    Selecting the PHP version from the Windows Azure Management Portal

    imageAfter logging into the Windows Azure Management Portal, click on the Web Sites navigation item from the left hand menu.

    image

Select the Web Site you wish to set the PHP version for, then click the arrow to navigate to the details screen.

    image

    Click on the CONFIGURE tab.

    image

    Ensure the value selected beside the PHP Version label is 5.3.

    image

    Perform any action which will require a save that will indicate the PHP 5.3 selection is intentional and not a reflection of the current platform default:

    • Add an App Setting

    • Temporarily toggle to PHP 5.4 or OFF

    • Enable Application or Site Diagnostics

    • Add/Change the Default documents

    Click on the Save button in the command bar at the bottom of the portal.

    image

    Selecting the PHP version from the Windows Azure Cross Platform Command Line Tools

Run the following command from your terminal of choice; be sure that the Windows Azure Cross-Platform CLI Tools are installed and the appropriate subscription is selected.

    azure site set --php-version 5.3 <site-name>

    Selecting the PHP version from the Windows Azure PowerShell Cmdlets

Run the following command from a PowerShell console; be sure that the Windows Azure PowerShell Cmdlets are installed and the appropriate subscription is selected.

    Set-AzureWebsite -PhpVersion 5.3 -Name <site-name>


Kevin Remde (@KevinRemde) reported BREAKING NEWS: A new “memory intensive” VM size in Windows Azure on 9/24/2013:

    imageIn case you haven’t noticed, Microsoft has added a new virtual machine size available in Windows Azure.  To go along with our really big “A6” and “A7” sizes, there is now an “A5” machine size…

    Memory-hogger size

So, if you don’t have a need for so many processors, but need a bigger chunk of RAM, you’re in luck.

    For more information, please refer to the Cloud Services or Virtual Machines sections of the Pricing Details webpages.

Andy Cross (@andygareweb) described Diagnosing a Windows Azure Website Github integration error in a 9/24/2013 post:

    imageYesterday I experienced an issue when trying to integrate a Windows Azure Website with Github. Specifically, my code would deploy from the master branch, but if I chose a specific other branch called ‘prototype’ I received a fetch error in the Windows Azure Management Portal:

    imageThis error has been reported to the team and I’m sure will be rectified so nobody else will run into it, but at Cory Fowler’s (@syntaxC4) prompting I wanted to document the steps I took to debug this as these steps may be useful to anyone struggling to debug a Windows Azure Website integration.

    Scenario

    In my scenario I had a project with a series of subfolders in my github repo. The project has progressed from a prototype to a full build but we were required to persist the prototype for design reference. We could have created ‘prototype’ without changing the solution structure, but as in all real world scenarios the requirement to leave the prototype available emerged only when we removed it and had changed the URL structure. We were only happy to continue working on new code if we could label the prototype or somehow leave it in a static state while the codebase moved on. This requirement is easily tackled by Windows Azure Websites and its Github integration; we changed the solution structure to have subfolders, created a new branch ‘prototype’ and continued our main work in ‘master’. Our ‘master’ branch has the additional benefit of having the prototype available for reference and quick applications of code change if we want to pivot our approach.

    We then created two Windows Azure Websites (for free, wow!). In order to allow Windows Azure Websites to deploy the correct code for each, we created a .deployment file in each repository. In this .deployment file we inform Windows Azure Websites (through its Kudu deployment mechanism) that it should perform a custom deployment.

    For the ‘master’ branch we want to deploy the /client folder, which involves a simple .deployment file containing

    [config]
    project = client

    For the ‘prototype’ branch we want to deploy the /prototype folder, which involves a simple .deployment file containing

    [config]
    project = prototype

    As you can see, these two branches then can evolve independently (although the prototype should be static).

    Problems Start

    The problems began when I tried to create a Windows Azure Website and integrate it with Github for the ‘prototype’ branch. No matter what I did, I couldn’t get the Github fetch to work:

At this point I fired off an email, and David Ebbo (@davidebbo) prompted me to stop being lazy and look for some deployment logs. PowerShell is your friend when it comes to debugging Windows Azure Websites, so I started there.

The first thing to do is to get the logs using ‘Save-AzureWebsiteLog’:

    PS C:\> Save-AzureWebsiteLog -Name partyr
    Save-AzureWebsiteLog : Access to the path 'C:\logs.zip' is denied.
    At line:1 char:1
    + Save-AzureWebsiteLog -Name partyr
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     + CategoryInfo : CloseError: (:) [Save-AzureWebsiteLog], UnauthorizedAccessException
     + FullyQualifiedErrorId : Microsoft.WindowsAzure.Management.Websites.SaveAzureWebsiteLogCommand

    Oops, helps if pwd is something writable to the current user…

    PS C:\> cd temp
    PS C:\temp> Save-AzureWebsiteLog -Name myWebsite
    PS C:\temp> ls
     Directory: C:\temp
    Mode LastWriteTime Length Name
    ---- ------------- ------ ----
    -a--- 23/09/2013 18:48 24406 logs.zip

Ok great. We have some logs. Let’s take a look!

    Inside the zip at the location: /deployments/temp-59fd85ea/ are two files, log.xml and status.xml. These didn’t prove very useful :-)

    Log.xml:

     <?xml version="1.0" encoding="utf-8"?>
     <entries>
     <entry time="2013-09-23T17:45:38.0768333Z" id="e7f9db74-a9e5-4738-93ee-028d051b6fd6" type="0">
     <message>Fetching changes.</message>
     </entry>
     </entries>

    Status.xml:

     <?xml version="1.0" encoding="utf-8"?>
     <deployment>
     <id>temp-59fd85ea</id>
     <author>N/A</author>
     <deployer>GitHub</deployer>
     <authorEmail>N/A</authorEmail>
     <message>Fetch from git@github.com:elastacloud/asosmyWebsite.git</message>
     <progress></progress>
     <status>Failed</status>
     <statusText></statusText>
     <lastSuccessEndTime />
     <receivedTime>2013-09-23T17:45:37.9987137Z</receivedTime>
     <startTime>2013-09-23T17:45:37.9987137Z</startTime>
     <endTime>2013-09-23T17:45:40.8578955Z</endTime>
     <complete>True</complete>
     <is_temp>True</is_temp>
     <is_readonly>False</is_readonly>
     </deployment>

    In the zip file at the location /LogFiles/Git/trace is a file that has much more useful information.

Partway down this encoded XML file is the error:

     <step title="Error occurred" date="09/23 17:16:12" type="error" text="fatal: ambiguous argument 'prototype': both revision and filename&#xA;Use '--' to separate filenames from revisions&#xA;&#xD;&#xA;D:\Program Files (x86)\Git\bin\git.exe log -n 1 prototype" stackTrace=" at Kudu.Core.Infrastructure.Executable.Execute(ITracer tracer, String arguments, Object[] args)&#xD;&#xA; at Kudu.Core.SourceControl.Git.GitExeRepository.GetChangeSet(String id)&#xD;&#xA; at Kudu.Services.FetchHandler.&lt;PerformDeployment&gt;d__c.MoveNext()&#xD;&#xA;--- End of stack trace from previous location where exception was thrown ---&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)&#xD;&#xA; at Kudu.Services.FetchHandler.&lt;&gt;c__DisplayClass1.&lt;&lt;ProcessRequestAsync&gt;b__0&gt;d__3.MoveNext()&#xD;&#xA;--- End of stack trace from previous location where exception was thrown ---&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)&#xD;&#xA; at Kudu.Contracts.Infrastructure.LockExtensions.&lt;TryLockOperationAsync&gt;d__0.MoveNext()&#xD;&#xA;--- End of stack trace from previous location where exception was thrown ---&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)&#xD;&#xA; at Kudu.Services.FetchHandler.&lt;ProcessRequestAsync&gt;d__6.MoveNext()&#xD;&#xA;--- End of stack trace from previous location where exception was thrown ---&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)&#xD;&#xA; at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)&#xD;&#xA; at System.Web.TaskAsyncHelper.EndTask(IAsyncResult ar)&#xD;&#xA; at System.Web.HttpTaskAsyncHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)&#xD;&#xA; at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()&#xD;&#xA; at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean&amp; completedSynchronously)" elapsed="0" />
     <step title="Outgoing response" date="09/23 17:16:12" type="response" statusCode="500" statusText="Internal Server Error" Cache-Control="private" X-AspNet-Version="4.0.30319" Content-Type="text/html; charset=utf-8" elapsed="0" />
     </step>

I missed this at first amongst all the noise in this file. What I did instead was give up on Notepad and XML and run a different PowerShell command: Get-AzureWebsiteLog -Name myWebsite -Tail, which connects PowerShell to a real-time stream of the website log. Really really neat.

    Clicking the sync button in Deployments of Windows Azure Websites Management Portal immediately showed activities in the Powershell console:

    PS C:\temp> Get-AzureWebsiteLog -Name myWebsite -Tail
     2013-09-23T17:51:01 Welcome, you are now connected to log-streaming service.
     2013-09-23T17:51:02 Error occurred, type: error, text: fatal: ambiguous argument 'prototype': both revision and file
     name
     Use '--' to separate filenames from revisions
    D:\Program Files (x86)\Git\bin\git.exe log -n 1 prototype, stackTrace: at Kudu.Core.Infrastructure.Executable.Execut
     e(ITracer tracer, String arguments, Object[] args)
     at Kudu.Core.SourceControl.Git.GitExeRepository.GetChangeSet(String id)
     at Kudu.Services.FetchHandler.<PerformDeployment>d__c.MoveNext()
     --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Kudu.Services.FetchHandler.<>c__DisplayClass1.<<ProcessRequestAsync>b__0>d__3.MoveNext()
     --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Kudu.Contracts.Infrastructure.LockExtensions.<TryLockOperationAsync>d__0.MoveNext()
     --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Kudu.Services.FetchHandler.<ProcessRequestAsync>d__6.MoveNext()
     --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at System.Web.TaskAsyncHelper.EndTask(IAsyncResult ar)
     at System.Web.HttpTaskAsyncHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
     at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
     at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
     2013-09-23T17:51:02 Outgoing response, type: response, statusCode: 500, statusText: Internal Server Error, Cache-Con
     trol: private, X-AspNet-Version: 4.0.30319, Content-Type: text/html; charset=utf-8

    Fantastic! There’s our error, and with less noise than the xml file that I was earlier confused by.

So that’s my problem: ambiguous argument ‘prototype’: both revision and filename. Use ‘--’ to separate filenames from revisions.

    This means my branch in Github is called ‘prototype’ and I have a file (folder technically) called ‘prototype’ in the system and this is ambiguous to the deployment system.

Now I can’t use ‘--’ to separate filenames from revisions – I don’t have that level of control over the deployment process. But what I do have control over is the branch name and the folder name. I chose to rename the prototype folder:

    Then I change my .deployment file to deploy the /proto folder:

    [config]
    project = proto

Pushing these changes immediately solved my issue, as shown by the continuing Get-AzureWebsiteLog -Name myWebsite -Tail:

    2013-09-23T17:54:53 Fetching changes.
    2013-09-23T17:54:58 Updating submodules.
    2013-09-23T17:55:00 Preparing deployment for commit id '7331a9c9c3'.
    2013-09-23T17:55:01 Generating deployment script.
    2013-09-23T17:55:01 Using the following command to generate deployment script: 'azure site deploymentscript -y --no-
    dot-deployment -r "C:\DWASFiles\Sites\partyr\VirtualDirectory0\site\repository" -o "C:\DWASFiles\Sites\partyr\VirtualDi
    rectory0\site\deployments\tools" --basic --sitePath "C:\DWASFiles\Sites\partyr\VirtualDirectory0\site\repository\proto"
    '.
    2013-09-23T17:55:01 The site directory path: .\proto
    2013-09-23T17:55:01 Generating deployment script for Web Site
    2013-09-23T17:55:01 Generated deployment script files
    2013-09-23T17:55:01 Running deployment command...
    2013-09-23T17:55:01 Command: C:\DWASFiles\Sites\partyr\VirtualDirectory0\site\deployments\tools\deploy.cmd
    2013-09-23T17:55:01 Handling Basic Web Site deployment.
    2013-09-23T17:55:01 KuduSync.NET from: 'C:\DWASFiles\Sites\partyr\VirtualDirectory0\site\repository\proto' to: 'C:\D
    WASFiles\Sites\partyr\VirtualDirectory0\site\wwwroot'
    2013-09-23T17:55:02 Deleting file: 'hubEventListener.js'
    2013-09-23T17:55:02 Deleting file: 'hubEventListener.js.map'
    2013-09-23T17:55:02 Deleting file: 'hubEventListener.ts'
    2013-09-23T17:55:02 Deleting file: 'readme.txt'
    2013-09-23T17:55:02 Copying file: 'index.html'
    2013-09-23T17:55:02 Copying file: 'signalr.html'
    2013-09-23T17:55:02 Deleting file: 'css\readme.txt'
    2013-09-23T17:55:02 Copying file: 'css\bootstrap.min.css'
    2013-09-23T17:55:02 Copying file: 'css\foundation.css'
    2013-09-23T17:55:02 Copying file: 'css\foundation.min.css'
    2013-09-23T17:55:02 Copying file: 'css\normalize.css'
    2013-09-23T17:55:02 Copying file: 'css\party.css'
    2013-09-23T17:55:02 Copying file: 'css\partyr.css'
    2013-09-23T17:55:02 Copying file: 'css\ticker-style.css'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.abide.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.alerts.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.clearing.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.cookie.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.dropdown.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.forms.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.interchange.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.joyride.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.magellan.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.orbit.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.placeholder.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.reveal.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.section.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.tooltips.js'
    2013-09-23T17:55:02 Copying file: 'foundation\foundation.topbar.js'
    2013-09-23T17:55:02 Deleting file: 'img\readme.txt'
    2013-09-23T17:55:02 Copying file: 'img\asos.png'
    2013-09-23T17:55:02 Copying file: 'img\bg.gif'
    2013-09-23T17:55:02 Copying file: 'img\draggable.jpg'
    2013-09-23T17:55:02 Copying file: 'img\facebook_icon.jpg'
    2013-09-23T17:55:02 Copying file: 'img\google_plus_logo.jpg'
    2013-09-23T17:55:02 Copying file: 'img\rand1.jpeg'
    2013-09-23T17:55:02 Copying file: 'img\rand2.jpeg'
    2013-09-23T17:55:02 Copying file: 'img\rand3.jpeg'
    2013-09-23T17:55:02 Copying file: 'img\rand4.jpeg'
    2013-09-23T17:55:02 Copying file: 'img\rand5.jpg'
    2013-09-23T17:55:02 Copying file: 'img\rand6.jpg'
    2013-09-23T17:55:02 Copying file: 'img\rand7.jpg'
    2013-09-23T17:55:02 Copying file: 'img\rand8.jpg'
    2013-09-23T17:55:02 Copying file: 'img\twitter-bird-light-bgs.png'
    2013-09-23T17:55:02 Copying file: 'img\voted.png'
    2013-09-23T17:55:02 Copying file: 'jasmine\SpecRunner.html'
    2013-09-23T17:55:02 Copying file: 'jasmine\lib\jasmine-1.3.1\jasmine-html.js'
    2013-09-23T17:55:02 Copying file: 'jasmine\lib\jasmine-1.3.1\jasmine.css'
    2013-09-23T17:55:02 Omitting next output lines...
    2013-09-23T17:55:03 Finished successfully.
    2013-09-23T17:55:03 Deployment successful.

    The Kudu guys have already tackled the issue (https://github.com/projectkudu/kudu/issues/785) but the above diagnostics should help some of you.



    <Return to section navigation list>

    Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses

    ‡ Gaurav Mantri (@gmantri) described a new management tool for Windows Azure subscriptions in his Cynapta – A New Beginning post of 9/27/2013:

imageIt’s been 6 months since I started Cynapta, so I thought I would share what I have been up to for the last 6 months, along with some more details about Cynapta and what we have been doing there.

    Cynapta – What’s that?

Well, Cynapta is the name of the new company I founded after Cerebrata. A lot of folks asked me how I came up with such quirky names. Well, the answer is rather simple – all the good names have already been taken, so I am left with names like this. I wanted a small name which is somewhat catchy, and I could not come up with a better one than this. Plus, the domain name was also available.

    What we’ll be doing @ Cynapta?

imageThat’s a million dollar question. I’m still not ready to spill all the beans, but there are a few things I can share: at Cynapta, we are building some Software as a Service (SaaS) applications. They will be hosted in Windows Azure. We have some really interesting products in the pipeline (yes, we have over 2 years of product pipeline) and all of them will be for cloud platforms, primarily Windows Azure, but we’ll dabble in other cloud platforms as well. Windows Azure has come a long way since its inception in 2008 and is growing stronger day by day; however, there are lots of “partner opportunities” in the platform. To begin with, we will explore these “partner opportunities” and come up with solutions for them.

    At Cerebrata, we focused mainly on building desktop tools but at Cynapta the focus will be on building web-based applications. No particular reason other than the fact that for the kind of applications we are building, it made sense for us to make them as web-based applications instead of desktop applications.

    We are more or less done with a major part of our first application and hopefully we will do a beta very soon – in a month or so. I will come back here to seek your participation in beta testing. Here’s a screenshot of what we are building:

    image

    What else @ Cynapta?

Well, apart from making the applications ready for beta, we need to get a blog up and running. In the past few months, we as a team have learnt immensely about Windows Azure and want to share those learnings with you. We will blog about the architecture of our applications and try to walk you through the process we went through in making those architecture decisions and building the products. Being a commercial entity, it may not be possible for us to share source code, but wherever we can, we will share the source code.

    Team @ Cynapta

We are now a strong team of 8 people. Unlike Cerebrata, where I started with all fresh graduates, this time we have a mix of experienced developers and freshers. Freshers still amaze me with their energy and thoughtfulness, while the experienced developers in my team have a very open mind, so that’s a very good thing for me. The only guy in the team with rigid thoughts is me, so I need to work on that. :)

    Here’s a picture of the team @ Cynapta.

    image

    What Else?

Oh, we have a Twitter account for ourselves. The handle for that account is @cynapta. I would appreciate it if you could start following it. Currently it does not do anything, but I promise it will be active very soon, as it will become the official medium for all news about Cynapta.

    Personal Stuff

On the personal front, things have started to become crazy (in a good way). In the past 6 months, I did some consulting work; worked with a bunch of super smart students from my alma mater to help them create a WordPress backup to Windows Azure plugin; and hung out at Stack Overflow trying to grab as many +10s and +15s as I could. I was hanging out there so much that a fellow Windows Azure MVP wondered if I am planning on building a business out of it. Jokes apart, it’s fun to hang out there. You get to learn so many things and it feels great when you end up helping somebody out. Another reason for hanging out at Stack Overflow is that it gives you great ideas about what kind of products one should build. Folks come there with problems for which no solution exists today. For us, that place is like a gold mine of ideas.

    Apart from that I learnt a lot of new stuff: Cloud Architecture, ASP.NET MVC 4, jQuery, Knockout.js etc. etc. Fun stuff!!! Wrote a lot of code (I mean really a lot of code) as well.

    Closing Thoughts

    All in all last 6 months have been pretty exciting. There’re still a lot of uncertainties but what’s life without some unpredictability. I’m really looking forward to the challenges that lie ahead of us.

    Wish us luck!!!

Good luck to Gaurav and his crew! Nice logo, too.


    ‡ Nick Harris (@cloudnick) and Chris Risner (@chrisrisner) produced Cloud Cover Episode 115: Getting Started with the New Windows Azure Cache Service with guest Haishi Bai on 9/27/2013:

    In this episode Nick Harris and Chris Risner are joined by Haishi Bai - Sr. Technical Evangelist on Windows Azure.  During this episode Haishi demonstrates the Windows Azure Cache Service preview including:

    • What is the NEW Windows Azure Cache Service Preview
    • What's the difference between Windows Azure Cache Service Preview, In-Role Cache and Shared Caching
    • Demonstration of how to provision a  Cache Service Preview and how to use it from Windows Azure Websites

    In the News:

    Like Cloud Cover on Facebook!

    Follow @CloudCoverShow
    Follow @chrisrisner
    Follow @cloudnick
    Follow @haishibai2010


    Gianugo Rabellino (@gianugo) reported Azul Systems Releases Zulu, an OpenJDK Build for Windows Azure, in Partnership with MS Open Tech on 9/25/2013:

    imageToday I’m happy to report the news that our Microsoft Open Technologies, Inc., (MS Open Tech) partner Azul Systems has released the technology preview for Zulu, an OpenJDK build for Windows Servers on the Windows Azure platform. Azul’s new OpenJDK-based offering has passed all Java certification tests and is free and open source.

    Azul’s new build of the community-driven open source Java implementation, known as OpenJDK, is available immediately for free download and use under the terms of the GPLv2 open source license.

    Built and distributed by Azul Systems, Zulu is a JDK (Java Development Kit), and a compliant implementation of the Java Standard Edition (SE) 7 specification.  Zulu has been verified by passing all tests in the Java SE 7 version of the OpenJDK Community TCK (Technology Compatibility Kit).

Azul has a lot of information about this exciting news on their website, including this press release that we would like to share.

    With the support of Azul Systems and MS Open Tech, customers will be assured of a high-quality foundation for their Java implementations while leveraging the latest advancements from the community in OpenJDK. The OpenJDK project is supported by a vibrant open source community, and Azul Systems is committed to updating and maintaining its OpenJDK-based offering for Windows Azure, supporting current and future versions of both Java and Windows Server. Deploying Java applications on Windows Azure will be further simplified through the existing open source MS Open Tech Windows Azure Plugin for Eclipse with Java.

    Key details of Azul Zulu include:

    • Free and open source offering, based on OpenJDK
    • Compatible with Java SE 7, verified using Java SE 7 OpenJDK Community TCK
    • Integrated with MS Open Tech’s Windows Azure Plugin for Eclipse with Java tooling
    • Patches and bug fixes contributed back to the OpenJDK community by Azul
    • ISV-friendly binary licensing for easy embedding with 3rd party applications
    • Availability for download and immediate use

    Executives of both companies highlighted the benefits of this new effort:

    Jean Paoli, president of MS Open Tech said, “Java developers have many development and deployment choices for their applications, and today MS Open Tech and Azul made it easier for Java developers to build and run modern applications in Microsoft’s open cloud platform.”

    Scott Sellers, president and CEO of Azul Systems said, “Azul is delighted to announce that Zulu is fully tested, free, open source, and ready for the Java community to download and preview – today. We are looking forward to serving the global Java community with this important new offering for the Azure cloud.”

    Zulu is available for download at www.azulsystems.com/products/zulu. Zulu Community Forums are listed on the Stack Overflow website under the tags “azure zulu” and “azul zulu.”

MS Open Tech and Azul Systems first announced our partnership on July 24, 2013.

    Customers and partners of Microsoft and Azul interested in participating in future Zulu tech previews are also invited to contact Azul at AzureInfo@azulsystems.com for additional information. And of course, please send questions and feedback to our MS Open Tech team directly through our blog.

    No significant articles so far this week.


<Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    ‡ Rich Edmonds (@RichEdmonds) reported Microsoft looking to invest $2.7 billion in a new Dutch data center in a 9/28/2013 post to the Windows Phone Central blog:

    Microsoft Store

    imageMicrosoft is looking to invest in the planning and construction of a new data centre at an industry site in Noord-Holland, Netherlands. The price tag on the project? 2 billion euros ($2.7 billion), which would see a new "green" data centre built on a site that will take up 40 acres worth of ground near the A7 highway.

    Numerous countries were reportedly in talks with Microsoft to secure a contract, but the decision was awarded to the Netherlands. How would the data centre be power efficient and supplied by green energy? Local greenhouses use ground-coupled heat exchangers and produce more electricity than required, which will open up new doors for Microsoft's new power-hungry project. Energy supplier Tennet is said to be on the plan as backup.

    imageHeat would also be transferred from the datacentre to the greenhouses, making it a rather lucrative deal for both parties. So what will this mean for consumers? We could well be looking at infrastructure being deployed for Xbox One and other services provided by the company. Adding a data centre to Europe will provide yet more scale to Microsoft's operations in the region. Microsoft already has a local data centre in Amsterdam for its Azure web services. [Emphasis added.]

    It's still early days for this new project, so it's worth noting this deal could fall through completely.

    Source: Tweakers (Dutch); thanks, MartinSpire, for the heads up and translation!


    Kenneth van Surksum (@kennethvs) posted Book: System Center: Designing Orchestrator Runbooks on 9/19/2013:

imageMicrosoft has released a free ebook titled "Microsoft System Center: Designing Orchestrator Runbooks". The book, which is written by David Ziembicki, Aaron Cushner, Andreas Rynes, and Mitch Tulloch, contains 182 pages. The book provides a framework for runbook design and IT process automation which will help you to get the most out of System Center 2012 Orchestrator.

    We will provide detailed guidance for creating what we call “modular automation” where small, focused pieces of automation are progressively built into larger and more complex solutions. We detail the concept of an automation library, where over time enterprises build a progressively larger library of interoperable runbooks and components. Finally, we will cover advanced scenarios and design patterns for topics like error handling and logging, state management, and parallelism. But before we dive into the details, we’ll begin by setting the stage with a quick overview of System Center 2012 Orchestrator and deployment scenarios.

    clip_image001

    The book contains the following sections:

    • Introducing System Center 2012
    • System Center Orchestrator
    • Orchestrator architecture and deployment
    • Modular runbook design and development
    • Orchestrator runbook best practices and patterns
    • Modular runbook example
    • Calling and executing Orchestrator runbooks

No significant articles so far this week.


    <Return to section navigation list>

    Windows Azure Pack, Hosting, Hyper-V and Private/Hybrid Clouds

    Kenneth van Surksum (@kennethvs) described a Paper: Implementing Hybrid Cloud at Microsoft on 9/19/2013:

imageMicrosoft has released a paper titled "Implementing Hybrid Cloud at Microsoft". The paper, which contains 6 pages, details the steps Microsoft IT has taken, adopting emerging and upgraded technologies, realigning organizational goals, and redefining plans based on lessons learned along the way, toward the organization-wide goal that "All of Microsoft runs in the cloud".

    The paper covers the following topics:

    • Situation
    • Solution
      • Planning
      • Evaluating available technology
        • Realizing Hybrid Cloud
        • Determining Applicable Delivery Methods
      • Assessing the Current State of IT At Microsoft
        • Infrastructure Readiness
        • Challenges to Public Cloud Adoption
  • Calculating Financial Challenges and Opportunities
        • Determining organizational readiness
      • Implementing a Hybrid Cloud Strategy
        • Implementing Cloud Computing Management
        • Determining Application Placement and Accelerating Adoption
      • Benefits
      • Best Practices

    clip_image001

    Conclusion:

    It is an exciting time to be in IT at Microsoft. The consumerization of IT is enabling agility and business benefits that were unimaginable even five years ago. As Microsoft IT continues to implement and mold its cloud computing strategy, they understand that many of the factors that will affect this strategy in the future are unknown. Furthermore, the technology surrounding cloud computing is constantly evolving and providing new ways to look at how IT is imagined. As such, Microsoft IT must remain flexible and adaptable as an IT organization, and leverage the growing capabilities of cloud computing at Microsoft.


    Bruno Saille posted Oracle Self Service Kit : See it in action in this video to TechNet’s Bulding Clouds blog on 9/23/2013:

    As a follow up to this series of posts introducing the Oracle Self Service Kit, here is a video going over a quick overview of the kit, as well as a demonstration (deploying a new database on a new dedicated server).

    Oracle Self Service Kit Overview and Demonstration

    Thanks for watching!


    <Return to section navigation list>

    Visual Studio LightSwitch and Entity Framework 4.1+

    Steve Lasker asked and answered What Time Is It? Global Time (Steve Lasker) in a 9/25/2013 post to the Visual Studio LightSwitch Team blog:

    You’re building apps that are hosted in the cloud. Your users can be anywhere across the globe. When you’re viewing a customer service call log, and you see a customer called at 4:30pm and was really upset their service was still down, how do you know how long ago they called? Was the time relevant to your time zone, the customers, or the person who took the call? Was it 5 minutes ago, or 3 hours and 5 minutes ago? Or was it the time zone of the server where the app is hosted? Where is the app hosted? Is it in your office on the west coast, the London office, or is the app hosted in Azure. If it’s in Azure, which data center is it located? Does it matter?

    Some more questions, this time in the form of a riddle. Yes, time is a theme here.

      • What time never occurs?
      • What time happens twice a year?

    What we’re dealing with here is a concept I call global time. At any given time, humans refer to time relevant to a given location. Although time is a constant, the way we refer to time is relative.

There is a concept called UTC, in which time would be a constant if the whole world referred to 5:00 pm as the exact same point in time no matter where you are. However, as humans we don’t think that way. We like 5:00 pm to represent the end of a typical work day. We like to know we all generally eat at 12:00 pm, regardless of where we are on the planet. But when we have to think about customer call logs being created from multiple locations, at any time, it’s almost impossible to read a string of 4:30 pm and know its true meaning in the global view of time.

    What about time zones?

So we can all wake at about the same time to a beautiful sunrise, all eat around noon, go out for drinks at sunset, or catch an 8:00 pm movie when it will most likely be dark, time zones were created as chunks of consistent time as our planet rotates around its axis.

    image

    This seems to make things relatively easy, right? Everyone can refer to 9-5 as common work hours.

    What about Daylight Savings Time

    Time zones would have been fine if the earth were spinning consistently related to the sun. However, as we happen to spin around and around our own axis, we also spin around the sun.

And, we spin at a slight enough angle that the sun doesn’t always rise at 6 am within a given time zone. In 1916, daylight savings time was started, where once a year we’d spring ahead, and once a year fall back, an hour to try to get the sun to rise and set at about the same time.
To make things a bit worse, daylight savings time changed in 2007, when it was felt the change would improve our energy consumption.
All of this was fine when we lived in little towns and didn’t really connect with others across the globe instantly. Even in modern times, when apps were islands unto themselves on each of our “PCs”, possibly connected via floppynet, it wasn’t a problem. When the corporate office was located in Dallas, we all knew that time was relevant to Dallas time. But in this new world, where there may no longer be a corporate office, or the data center may no longer be located in the basement of the corporate office, we need a better solution.

    Problems, problems, but what to do?

In 2008, SQL Server 2008 and .NET Framework 3.5 SP1 introduced a new date-time datatype called DateTimeOffset. This new type aims to balance the local time relevance humans seek with the global constant our applications need to function.
DateTimeOffset allows humans to view a date & time as we think about it locally, but it also stores the offset relative to UTC. This combination supports a constant, and allows apps to reason over time in their time zone.

    An Example

    Assume we’re located in the Redmond WA office. It’s 4:35 pm. According to our call log, our upset customer called at 4:30pm. If we store this without the offset, we have no idea how this relates to where we are right now. Did the NY, Redmond or London office take the call? If the user that saved the value was on the east coast, and it used their local time, it would store 4:30pm and -5 as the offset.
Using this combination of time and offset, we can now convert this to west coast time. The app pulls the time from the database. It calculates Pacific Time, which is UTC -8, and subtracts another 3 hours (the difference between ET at UTC -5 and PT at UTC -8). Our customer called at 1:30 pm Pacific time. That’s 3 hours ago, and there’s no log of activity. Our customer is likely very upset. However, if it were stored as 4:30 pm and -8 (Pacific Time), the customer called just 5 minutes ago, and we can make sure someone from service is tending to their outage.
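A quick C# sketch of that conversion (illustrative values only, not from the original post): DateTimeOffset.ToOffset re-expresses the same instant in another zone’s offset, so a value stored on the east coast reads naturally on the west coast:

    using System;

    class OffsetConversion
    {
        static void Main()
        {
            // 4:30 PM Eastern, stored with the -5 offset from the example above.
            var callTime = new DateTimeOffset(2013, 9, 19, 16, 30, 0, TimeSpan.FromHours(-5));

            // Same instant, re-expressed in Pacific time (UTC -8).
            DateTimeOffset pacificView = callTime.ToOffset(TimeSpan.FromHours(-8));
            Console.WriteLine(pacificView); // 9/19/2013 1:30:00 PM -08:00
        }
    }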

    LightSwitch, Cloud Business Apps and Global Time

    With a focus on cloud apps, Visual Studio 2013 LightSwitch and Cloud Business Apps now support the DateTimeOffset data type.

    image

    Apps can now attach to databases and OData sources that use DateTimeOffset, and you can now define new intrinsic databases with DateTimeOffset.

    Location, Location, Location

    When building apps, there are 3 categories of use for the DateTimeOffset data type:

• Client Values
  Values set from the client, user entered, or set in JavaScript:

      screen.GlobalTime.ClientValue = new Date();

• Mid-Tier Values
  Set through application logic, within the LightSwitch server pipeline:

      partial void GlobalTimes_Inserting(GlobalTime entity)
      {
          entity.MidTierValue = DateTimeOffset.Now;
      }

• Created/Modified Properties
  LightSwitch and Cloud Business Apps in Visual Studio 2013 now support stamping entity rows with Created/Modified properties. These values are set in the mid-tier. A future blog post will explain these features in more detail.

Because the tiers of the application may well be in different time zones, LightSwitch has slightly different behavior for how each category of values is set.

    • Client Values
No surprise, the client values utilize the time zone, or more specifically the UTC offset, of the client. This obviously depends on the device’s time, which can be changed, and does change on cell-tethered devices as you travel across time zones.
    • Mid-Tier Values
Code written against the mid-tier, such as the Entity Pipeline, uses the UTC offset of the server’s clock. This is where it gets a little interesting as the clock varies. In your development environment, using your local machine, it’s going to be your local time zone. For me, that’s Redmond WA, UTC -8.  If you’re publishing to an on-premises server, your datacenter may likely use the local time zone as well. In our Dallas TX example, that would be UTC -6.  However, if you’re publishing to Azure, the servers are always in UTC -0 regardless of the data center. This way your apps will always behave consistently, regardless of where the servers are located.
    • Created/Modified Properties
We considered making these use the server time zone, but felt it would be better to be consistent regardless of where the server was located. This solves the problem for data that may span on-premises and the cloud. Created/Modified values are always UTC -0.
    A Quick Walkthrough

    Using the above example, let’s look at how these values would be stored in the database, and viewed in the app from New York and Redmond WA

We’ll create an Entity/Table with the following schema:

    image

    Under the covers, LightSwitch will create a SQL Table with the following schema.

    image

    Just as noted above, we’ll create a screen, and enter values in the ClientValue on the Client, the MidTierValue on the MidTier, and LightSwitch will automatically set the Created property.

    For the sake of simplicity, I normalized the values and removed the variable for how fast someone could type and press save on the client and have it be the exact time on the server. Reality often confuses a message.

Let’s assume it’s 1:30:30 PM on 9/19/2013, which is daylight savings time in the Pacific Northwest. What values would be set for our 3 properties?

    image

Notice the Created value is always in UTC -0. The mid-tier uses the time of the server. And the browser client displays all values consistently, as the UTC offset normalizes the times regardless of the time zone in which they were captured.

    What date is it?

That wasn't so bad; just a little math on the hour. However, let's assume we're working late that same night, and it's now 10:30 PM on 9/19/2013 in Redmond, WA. We're still on daylight saving time, but what do you see that's different here?

    image

Although it's still 9/19 in Redmond, New York is three hours ahead, so there it's now 1:30 AM on 9/20. Notice that our Azure servers are also already working in 9/20.

    Standing on the edge

It's now the second Sunday in March 2014, March 9 to be specific, the night daylight saving time begins. Let's see what happens.

    image

In this case, not only are the times split across dates between New York and Redmond, WA, but we've also crossed into daylight saving time. Instead of New York being three hours ahead, it's actually four hours ahead, for the same point in time. Ouch…

    My Head Hurts

Attempting to account for all these different time zones, plus daylight saving time (if it even applies to your time zone), can certainly be confusing. So, what to do? Well, that of course depends. However, just as we learned in math, we need to find a common denominator. In most cases, if you convert to UTC, you can avoid the variables. You can use TimeSpan to calculate the difference between two points in time, then re-apply the UTC offset for display. None of this would be possible, however, if the values you're attempting to calculate with didn't include the UTC offset.

Thus, LightSwitch and Cloud Business Apps now support this important element so you can calculate dates and times across your globally deployed application.
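To make the math concrete, here is a minimal sketch using the March scenario above. The sample values are hypothetical, but the offsets match the scenario: PST at UTC-8, EDT at UTC-4:

    var redmond = new DateTimeOffset(2014, 3, 8, 23, 30, 0, TimeSpan.FromHours(-8)); // 11:30 PM PST
    var newYork = new DateTimeOffset(2014, 3, 9, 3, 30, 0, TimeSpan.FromHours(-4));  // 3:30 AM EDT

    // Subtraction compares the underlying UTC instants, so the offsets drop out.
    TimeSpan difference = newYork - redmond; // 00:00:00, the same point in time

    // Normalize to UTC for storage or calculations...
    DateTimeOffset utc = redmond.ToUniversalTime(); // 3/9/2014 7:30:00 AM +00:00

    // ...then re-apply an offset for display in a given time zone.
    DateTimeOffset display = utc.ToOffset(TimeSpan.FromHours(-4)); // 3/9/2014 3:30:00 AM -04:00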

    What about the riddles?

    Ahh, you’re still here?

• What time never occurs?
  2:00 AM on the second Sunday of March.
  As of 2007, daylight saving time begins on the second Sunday in March. At 2:00 AM, the clocks "leap" forward to 3:00 AM, skipping the 2:00 AM hour altogether.
• What time happens twice a year?
  1:00 AM on the first Sunday of November.
  As of 2007, daylight saving time ends on the first Sunday in November. At 2:00 AM, the clocks roll back to 1:00 AM, repeating the 1:00 AM hour. (The sketch below shows how .NET detects both cases.)
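If you'd rather not memorize those rules, the .NET TimeZoneInfo class can detect both cases for you. A quick sketch, using the Windows time zone ID for Pacific Time:

    var pacific = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");

    // 2:30 AM on the second Sunday of March 2014 never occurs.
    Console.WriteLine(pacific.IsInvalidTime(new DateTime(2014, 3, 9, 2, 30, 0)));    // True

    // 1:30 AM on the first Sunday of November 2013 occurs twice.
    Console.WriteLine(pacific.IsAmbiguousTime(new DateTime(2013, 11, 3, 1, 30, 0))); // True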

    Paul van Bladel (@paulbladel) described The simplest way to upload files to a Lightswitch application in a 9/20/2013 post:

    imageIntroduction

In the previous post I covered downloading files. In order to test that, we need upload functionality.

There are two options:

1. Upload the file via the table mechanism
2. Upload the file via Web API

    Here is the sample solution: http://code.msdn.microsoft.com/vstudio/Simple-web-api-based-file-5e0c5844

    Upload a file via the table mechanism

Since I'm a plumber, I leave fancy UI stuff to more UI-talented people. My upload screen simply has a label and a button: the label displays the file name, and the button does the file selection:

    uploadviatablescreen

The button is a custom control, but a very simple one; no XAML here.

    button

The reason we add the button as a custom control is to accommodate an annoying Silverlight security restriction, which I will not try to fully explain here.

    The view model looks as follows:

    tableviewmodel

A hundred or two lines of source code are elided for brevity.
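For readers who want the gist without downloading the sample, here is a minimal, hedged sketch of what such a click handler typically boils down to in Silverlight. The FileName and FileData property names are hypothetical stand-ins, not Paul's actual view model:

    private void SelectFileButton_Click(object sender, RoutedEventArgs e)
    {
        // OpenFileDialog may only be shown from a user-initiated event;
        // that's the Silverlight security restriction mentioned above.
        var dialog = new OpenFileDialog();
        if (dialog.ShowDialog() == true)
        {
            using (var stream = dialog.File.OpenRead())
            {
                var bytes = new byte[stream.Length];
                stream.Read(bytes, 0, bytes.Length);
                this.FileName = dialog.File.Name; // displayed in the label
                this.FileData = bytes;            // saved through the table mechanism
            }
        }
    }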


    Frans Bouma (@FransBouma) reported ORM Profiler v1.5 beta has been released! on 9/5/2013 (missed when published):

    Yesterday we released the first beta of ORM Profiler v1.5! The new features are:

• Entity Framework 6 support
• Real-time analysis:
  • New analysis types:
    • Number of connections/sec exceeds a given threshold
    • Number of commands/sec exceeds a given threshold
    • Transaction time exceeds a given threshold (also new in snapshot analysis)
    • Connection open time exceeds a given threshold (also new in snapshot analysis)
  • The following analyses were already present in ORM Profiler and are now also done in real time:
    • DB time of command exceeds a given threshold
    • .NET consume time of command exceeds a given threshold
    • An exception occurs
    • The resultset is too large
    • Too many commands per usage period
    • Massive SQL statement
• Resultset retrieval in-client
• Minor enhancements/changes:
  • Settings are now persisted to disk
  • Additional snapshot analysis alerts (introduced with the real-time analysis):
    • Transaction time exceeds threshold
    • Connection open time exceeds threshold
  • Better time aggregation in hierarchical views: command DB time/.NET time is now aggregated to the complete tree of nodes instead of just its direct parent
  • Retrieving the plan now uses a quick connection open
• Async support (already enabled in the latest v1.1 build, in case you missed it)

v1.5 is a free upgrade for all v1.x licensees and can be downloaded from the customer area on http://www.ormprofiler.com. Please read the enclosed readme1st.docx for how to proceed and install.

    Frans also announced 10 years of LLBLGen Pro on 9/9/2013.


    <Return to section navigation list>

    Cloud Security, Compliance and Governance

    • Barbara Darrow (@gigabarb) reported Microsoft gooses Azure security with multifactor authentication in a 6/26/2013 post to GigaOm’s Cloud blog:

Microsoft checked off another checklist item for Windows Azure when it turned on multifactor authentication on Thursday.

Multifactor authentication requires a user to put in her password or code as a first step, but then adds another step to the process. One of my credit card companies, for example, requires me to get an additional passcode via voicemail or text, which I must also key in to access my account information.

    It’s that extra layer of security that Microsoft is adding here. Per a blog post by Steve Martin, GM for Windows Azure:

Multi-Factor Authentication quickly enables an additional layer [of] security for users signing in from around the globe. In addition to a username and password, users may authenticate via: 1) An application on their mobile device. 2) Automated voice call. 3) Text message with a passcode. It's easy and meets user demand for a simple sign-in experience.

Microsoft charges either $2 per user per month or $2 per 10 authentications for this service.

    Given the concern around data privacy and security, this is an important addition to Microsoft’s public cloud portfolio. Amazon, the leader in public cloud, has offered multifactor authentication for some time and Google added server-side encryption to its Google Cloud Platform earlier this summer.

After some high-profile hacking incidents, the pressure is on even for consumer-oriented services (e.g. Twitter) to add multifactor authentication going forward, so stay tuned.


    Full disclosure: I’m a registered GigaOm analyst.


    Nuno Godinho (@NunoGodinho) posted Lessons Learned Building Secure and Compliant solutions in Windows Azure on 9/11/2013 (missed when published):

In July, I decided to create a series of three posts about this topic. Those three posts are:

    In this post I’ll be focusing on the last part, which are the lessons learned.

    Quick Concepts

When we think about compliance and security, there are two concepts we need to consider and master: Data in Transit and Data at Rest. But what is this all about?

    Data at Rest

This refers to inactive data which is stored physically in any digital form (e.g. databases, data warehouses, spreadsheets, archives, tapes, off-site backups, mobile devices, etc.). In addition, subsets of data can often be found in log files, application files, configuration files, and many other places.

Basically, you can think of this as data stored in a place from which it can be retrieved even after a restart.

    Data in Transit

    This is commonly delineated into two primary categories:

• Data that is moving across public or "untrusted" networks such as the Internet, and
    • Data that is moving within the confines of private networks such as corporate Local Area Networks (LANs)

When working with compliant solutions, you always need to take these two into consideration, because they are the two areas on which compliance standards focus.
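As a concrete (and hedged) illustration of protecting Data at Rest, a minimal AES sketch in C# might look like the following; key management, IV handling, and storage details are deliberately omitted, and in a real design they are the hard part:

    using System.IO;
    using System.Security.Cryptography;

    static byte[] EncryptAtRest(byte[] plaintext, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        using (var output = new MemoryStream())
        {
            using (var crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
            {
                crypto.Write(plaintext, 0, plaintext.Length);
            } // disposing the CryptoStream flushes the final block
            return output.ToArray(); // ciphertext, safe to persist at rest
        }
    }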

    Lessons Learned

In order to make sure that the solution is "acceptable" from a data privacy and compliance perspective, I normally use the following process, which I would like to share with you.

1. Perform an assessment of the organizational structure to understand where the business is being conducted and which laws and compliance standards apply.
  • This is extremely important because, if we work in the Betting & Gaming industry, we might find that a company is located in one place but has its gateways in another, like Malta, Gibraltar and so on. By understanding this we will be able to determine exactly which compliance standards should be followed and which ones we can ignore.
  • The same thing applies, for example, to the Healthcare industry, where you have HIPAA compliance; it is important to understand where the company that builds the product is located, as well as where its customers are, since different countries will have different compliance requirements.
2. Understand in which countries both the customer and the software vendor are located. This will help you understand which rules apply to that specific organization and plan for them.
3. Identify the specific data you need to encrypt or need to avoid moving into the cloud because of compliance issues.
  • This is an extremely complex exercise, because you can't say at a high level that all the data can or can't go to the cloud; you need to dig into the compliance requirements and understand exactly which fields are restricted.
  • For example, in the Healthcare industry you have HIPAA compliance to meet, and you also have to work with both PII (Personally Identifiable Information) and PHI (Personal Health Information), which can't be in the cloud at this stage. So you normally hear people say immediately that such an application cannot move into the cloud. That isn't actually true. If you analyze the PHI and PII details, you will see that the health information can be anywhere as long as it is not possible to match it to the person it relates to. If you look at it, this isn't actually that hard to do. You can anonymize the data, place "Patient A" and the full health history in the cloud, do the necessary processing, and then just send the information back on-premises, where a small table correlates "Patient A" with the real patient information doctors work with.
4. After understanding all the requirements and the compliance standards applicable to the solution, you need to look at where your Data at Rest is currently being stored inside your customer's datacenter:
      • Databases
      • File Servers
      • Email Systems
      • Backup Media
      • NAS
5. Now you should locate your Data in Transit across the network channels, both internal and external. You should:
      • Assess the data trajectory
      • Assess how data is being transferred between the different elements of the network
6. Decide how to handle sensitive data. There are several options you might take:
      • Eradication
      • Obfuscation/Anonymization
      • Encryption
• Note: Normally we go more with the Encryption option, but anonymization is also really important and in some cases the only way to go. For example, look at PII and PHI: anonymization would be the way to go there (see the sketch below).
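To make the anonymization idea in step 6 concrete, here is a minimal sketch; all type and property names are hypothetical, not from any real solution:

    using System;
    using System.Collections.Generic;

    public class PatientRecord
    {
        public string PatientName { get; set; }   // PII: must stay on-premises
        public string HealthHistory { get; set; } // PHI: safe once de-identified
    }

    // The token-to-name map never leaves the on-premises database; only the
    // token and the health history are sent to Windows Azure for processing.
    public static string Anonymize(PatientRecord record, IDictionary<string, string> onPremMap)
    {
        string token = Guid.NewGuid().ToString("N"); // the "Patient A" stand-in
        onPremMap[token] = record.PatientName;
        return token;
    }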

If you follow this simple process, you will be successful in identifying what needs to be handled and how it needs to be handled, and you will make your compliant solutions ready to move to Windows Azure.


    Nuno Godinho (@NunoGodinho) continued his series with Introduction to Windows Azure Compliance on 9/9/2013 (missed when published):

In July, I decided to create a series of three posts about this topic. Those three posts are:

    In this post I’ll be focusing on the Windows Azure compliance part.
    Introduction to Windows Azure Compliance

Compliance is extremely important when moving or building solutions in the cloud for two main reasons. First, it provides us with an understanding of the type of infrastructure that underlies the cloud offering. Second, several kinds of solutions and companies require specific compliance certifications in order to be approved for deployment.

In order to achieve this, the Windows Azure infrastructure provides the following compliance certifications:

    image

ISO/IEC 27001:2005

"Specifies a management system that is intended to bring information security under explicit management control" by Wikipedia. More information here.

This is extremely important because it gives us clear information about how secure our data will be inside Windows Azure.

    SSAE 16/ISAE 3402 SOC 1, 2 and 3

"Enhancement to the current standard for Reporting on Controls at a Service Organization, the SAS70. The changes made to the standard will bring your company, and the rest of the companies in the US, up to date with new international service organization reporting standards, the ISAE 3402" by SSAE-16.com. More information here.

It is extremely important to understand that Windows Azure is audited and has to follow strict rules in terms of reporting in order to remain compliant. This gives us confidence that everything follows a specific, well-defined process.

HIPAA/HITECH

    “The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, was signed into law on February 17, 2009, to promote the adoption and meaningful use of health information technology.” by hhs.gov. More information here.

Having HIPAA compliance means that solutions for the healthcare industry can be delivered in Windows Azure, because the underlying infrastructure is already HIPAA compliant. This doesn't mean that anything we build is automatically HIPAA compliant; it just means that Windows Azure can be used to deploy the solution, while the solution itself still needs to comply with the rest of HIPAA, mainly the software compliance part.

    PCI Data Security Standard Certification

"Created to increase controls around cardholder data to reduce credit card fraud via its exposure. Validation of compliance is done annually — by an external Qualified Security Assessor (QSA) that creates a Report on Compliance (ROC)[1] for organizations handling large volumes of transactions, or by Self-Assessment Questionnaire (SAQ) for companies handling smaller volumes.[2]" by Wikipedia. More information here.

This doesn't mean that we can deploy PCI-compliant solutions in Windows Azure; this certification covers only the way Windows Azure itself accepts payments, not third-party applications.

    FISMA Certification and Accreditation

"Assigns specific responsibilities to federal agencies, the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) in order to strengthen information system security. In particular, FISMA requires the head of each agency to implement policies and procedures to cost-effectively reduce information technology security risks to an acceptable level.[2]

    According to FISMA, the term information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide integrity, confidentiality and availability.” by Wikipedia. More information here.

    Windows Azure Compliance Roadmap

Windows Azure has other compliance certifications in progress as well, so here is the complete roadmap.

    image

    Summary

What all this means is that Windows Azure is a secure and highly compliant option, which allows us to leverage the cloud in many different scenarios.

    The Windows Azure team has a Trust Center which will give you all the information about Security, Privacy and Compliance.


    <Return to section navigation list>

    Cloud Computing Events

    ‡ Sam Vanhoutte presented Connecting the Cloud with your local applications to CloudBurst attendees on 9/19/2013 in Stockholm, Sweden:

In the new scenarios where the cloud is being used, integration becomes very important. Luckily, the Windows Azure platform provides many different capabilities and services to make a secure link between your local systems and Windows Azure services or machines. In this session, an overview is given of the different technologies and the scenarios to which each is best suited. The following technologies will be demonstrated and discussed:

• Connectivity at the messaging level: Service Bus Messaging
• Connectivity at the service level: Service Bus Relay
• Connectivity at the data level: SQL Data Sync
• Connectivity at the network level: Windows Azure Virtual Networking
• Connectivity at the security level: Active Directory integration


    • Beth Massi (@bethmassi) announced on 6/26/2013 that she'll present sessions about Cloud Business Apps & Visual Studio 2013 at Silicon Valley Code Camp! on 10/5 and 10/6/2013:

I'll be speaking again this year at the biggest code camp in the world, Silicon Valley Code Camp, on October 5th & 6th. I've been speaking there for years and it's always a well-run and very well-attended code camp.
    If you’ve never attended and you’re in the San Francisco Bay Area I *highly* encourage you to attend. There are over 200 sessions and 3000 people registered! What are you waiting for?

    My Sessions

    This year I have two sessions on building HTML5-based mobile business apps. One will focus on how to build a data-centric HTML client and deploy it to the cloud (Azure) using LightSwitch, and the other will focus on the new Office 365 Cloud Business App project in Visual Studio 2013, which streamlines the way you build custom SharePoint 2013 business apps.

    Please click on the sessions that interest you and let the organizers know if you’re interested in attending so they can help plan room sizes.

    Building HTML5-based Business Apps on Azure with Visual Studio LightSwitch
    9:45 AM Saturday Oct 5th

Visual Studio LightSwitch is the easiest way to create modern, data-centric, line-of-business applications for the enterprise. In this demo-heavy session, we will build and deploy, end-to-end, a full-featured business app that runs in Azure and provides rich user experiences tailored for modern devices. We'll cover how LightSwitch helps you focus your time on what makes your application unique, allowing you to easily implement common business application scenarios, such as integrating multiple data sources, data validation, authentication, and access control. We'll cover complex business rules and advanced data services for facilitating custom mobile reporting dashboards. You will also see how developers can use their knowledge of HTML5 and JavaScript to customize their apps with custom controls and client-side logic.

    Developing Office 365 Cloud Business Apps with Visual Studio 2013
    11:15 AM Saturday Oct 5th

    Office 365 is an ideal business app platform providing a core set of services expected in today’s business apps like presence, social, integrated workflow and a central location for installing, discovering and managing the apps. Office 365 makes these business apps available where users already spend their time – in SharePoint & Office. Visual Studio 2013 streamlines the way developers build modern business applications for Office 365 and SharePoint 2013 with the Office 365 Cloud Business App project. In this demo-heavy session, you’ll see how developers can build social, touch-centric, cross-platform Office 365 business applications that run well on all modern devices.

    Meetup Logo Swap / Sponsor Program

I am speaking at Silicon Valley Code Camp. Please come to my session! Click here for details.

Do you organize a special interest group on Meetup.com? If your meetup sponsors code camp (at no cost to anyone, of course), then people at your meetup will know about code camp, and your meetup logo and link will be shown on practically every code camp page (in the sponsor area). Last year the SVCC site had 200,000+ page views over the month!

    Want some free advertising? Follow the simple directions on this page to participate!


    • Doug Mahugh (@dmahugh) posted Announcing the first Node hackathon in Redmond, November 7-8 to the Interoperability @ Microsoft blog on 9/26/2013:

The MS Open Tech team has been working with the Node.js community for more than two years to deliver a great experience on Windows and Windows Azure for Node developers. It's been an exciting and rewarding experience, and we're looking forward to taking it to the next level as we continue the journey together.

    To that end, we’re happy to announce the first Node/Windows Hackathon, sponsored by Microsoft Open Technologies, Inc. This event will take place in Redmond on November 7-8, 2013, at the new “Garage” facility currently under construction in building 27 of the Microsoft campus. The event is open to everyone. We’ll be sharing more details in the next few days, but we’re announcing the dates now so that you can reserve the date and make plans to participate.

This will be a great opportunity for the Node community to get to know the many Microsoft developers who love to work with Node.js as much as they do, and we'll work together to test new scenarios, explore new features, and make the Node experience even better for Windows and Windows Azure developers. There will be plenty of pizza and beverages, lots of time for hacking as well as socializing, and we're planning a surprise announcement at the event that we think will make Node developers on Windows very happy.

    Please sign up at the EventBrite registration page and get involved if you’d like to participate, or have suggestions for projects and scenarios to explore.  We’d love to see you in Redmond for the event, but if you can’t be there in person we’ll also have opportunities for online attendance. (Details for online participation will be posted soon.)


    The Microsoft Server and Cloud Platform Team announced Microsoft Delivers at Oracle OpenWorld 2013 on 9/23/2013:

Oracle OpenWorld 2013 kicks off today, and it might surprise you to learn how many great Microsoft sessions will be delivered. Oracle products run on the platforms that make up the Microsoft Cloud OS vision, so it's important for us to give you great information on our latest innovations and on how partner products integrate and execute well.

    For starters, you might want to rewind a couple months back and see Brad Anderson’s blog post “After Today, Cloud Computing is No Longer a Spectator Sport”.  It establishes some of the groundwork for the What’s New In Windows Server 2012 R2 series, also on his blog.

    If you’ll be attending OpenWorld in San Francisco, find your way to our booth for an opportunity to see the Cloud OS in action - through both interactive and guided demos. Product experts will be on hand to answer your public and private cloud puzzlers.

Then, make your way to one of our sessions to get first-hand vision and instruction. Here are some sessions we think you'll find valuable:

• Microsoft and Oracle: Partners in the Enterprise Cloud - The Cloud OS is Microsoft's comprehensive cloud computing vision that leverages Microsoft's unmatched legacy of running the world's highest-scale online services and most advanced datacenters to deliver a modern platform for the world's applications and devices. Join Brad Anderson, Corporate Vice President of Windows Server and System Center Program Management, as he showcases how Microsoft and Oracle are working together to help customers embrace cloud computing by improving flexibility and choice while also preserving first-class support for mission-critical workloads. Presented by Microsoft VP Brad Anderson on 9/24 at 1:30pm in Moscone North - Hall D
    • Traversing the Public and Private Cloud with Windows Azure and Windows Server Hyper-V - Attend this session to learn how you can have your cake and eat it too—moving virtual machines from your own data center to the cloud and back. The presentation discusses some of the factors that go into deciding whether to use the cloud for an Oracle Database deployment and what scenarios benefit from a combination deployment across public and private cloud environments. Presented by Steven Martin and Mike Schutz on 9/24 at 3:45pm in Moscone West – 2010.
    • Windows Azure: What’s New in the Microsoft Public Cloud - Get familiar with how you can use Windows Azure like an extension of your own data center by running Oracle software in this public cloud environment. We’ll explore common scenarios where Windows Azure has proven value, and provide guidance for getting the most out of Windows Azure. Presented by Steven Martin on 9/25 at 10:15am in Moscone South – 250.

There's even more to see. Search the sessions below in the Oracle OpenWorld 2013 content catalog for more details:

    • Windows Server and Hyper-V Highlights – Best Platform for your Oracle Workloads (Jeff Woolsey)
    • Developing Java Apps on Windows Azure (Gianugo Rabellino)
    • Panel: Building and Managing a Cloud Infrastructure Built on Oracle WebLogic Server (Gianugo Rabellino)
    • Java on Windows Azure: Tips, Tricks, and Examples (Brian Benz – MS Open Tech)

    For those of you that want to try Windows Server 2012 R2, you can download the Preview and RTM bits.  See the Microsoft.com Windows Server 2012 R2 area for more information.  See the TechNet Evaluation Center for previews and downloads of other Microsoft products like System Center, SQL Server, and Microsoft Exchange.

    We hope you enjoy the Microsoft sessions at OpenWorld!

    See the Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN section above for Windows Azure content at Open World.


    R. Ray Wang (@rwang0) posted Event Report: Day 1 At Oracle Open World 2013: The Quest For Innovation #oow13 on 9/22/2013:

    Past Oracle Open Worlds Have Disappointed Customers and Partners

Let's be frank. The past five years at Oracle Open World have disappointed even the faithful. The overemphasis on hardware marketing and revisionist history on cloud adoption bored audiences. The $1M paid advertorial keynotes had people walking out on the presenters 15 minutes into the speech. Larry Ellison's insistence on re-educating the crowd on his points subsumed the announcements on Fusion apps. Even the cab drivers found the audience tired, the show even more tiring.

Oracle went from a hot, innovative, must-attend event to a has-been, while most industry watchers, analysts, and media identified shows such as Box's BoxWorks, Salesforce.com's Dreamforce, and ExactTarget's Connections as the innovation conferences in the enterprise. These events, like Constellation's Connected Enterprise, capture not only the spirit of innovation but also provide customers a vision to work toward. Hence, most believe Open World could use some much-needed rejuvenation and a shot of innovation juju (see Figure 1).

    Figure 1. Oracle Open World Lights Up San Francisco From September 22nd to September 27th

    Read Ray’s full OpenWorld reports here.


    David C. Chou (@davidcchou) listed Windows Azure Developer Camps 2013Q4 in a 9/19/2013 post:

    image

    Join a Microsoft Developer Camp and learn critical skills that all modern developers need to know.

Roll up your sleeves at this free, one-day, instructor-led workshop where you can learn the critical skills that all modern developers need to know. We will start with basic Microsoft Modern Platform principles and build up to more advanced topics. Instructor-led, hands-on labs will focus on:

    • Updating an App to a Modern Architecture
    • Modern Dev & Test Practices
    • Configuring and Testing a Modern Application

    Throughout the day, you’ll hear from local Windows Azure partner specialists and Microsoft product team members.
    We’ll talk about the best way to take advantage of modern platforms and tools, as well as how to fix the worst sins of testing. Developers of all languages are welcome!

    Be fully prepared for this hands-on day of coding by bringing your laptop and signing up for the free Windows Azure Trial.

    Events are available across 30 cities in the U.S. Visit the event site for more details - http://www.microsoft.com/enterprise/events/make-it-happen.htm. Events local to western U.S. are:

    clip_image001


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Barb Darrow (@gigabarb) reported “After two weeks of public silence, Nirvanix confirms what it had already told its customers: it’s shutting down” in a summary of her Nirvanix fesses up: “It’s true. We’re gone” article of 9/27/2013 for GigaOm’s Cloud blog:

Two weeks after reports surfaced that Nirvanix was closing its cloud storage business, the company has broken its silence and acknowledged that it is, in fact, shutting down. The news was posted Friday night to the company's web site, which has otherwise been wiped clean (see the screen grab below).

Customers were referred to IBM SoftLayer, a Nirvanix partner. IBM had already told GigaOM it was working to transition customers over. Meanwhile, nearly every other cloud player in the universe has been circling to scoop up Nirvanix customers.

Earlier this week, Leo Leung, VP of marketing at Oxygen Cloud, a cloud broker service, said Oxygen had successfully migrated several joint customers to other cloud providers. One was a real estate company with several terabytes of data. (He wrote about the issue on his blog.)

Rackspace offered Nirvanix refugees free data migration to Rackspace Cloud Files, along with a month of free storage (well, up to $1,500 anyway). HP and Panzura also did some customer migrations.

Some said customers would be crazy to trust their data, post-Nirvanix, to anyone but the biggest cloud storage providers. To Andres Rodriguez, CEO of Nasuni, a company that manages enterprise cloud storage, that means Amazon S3 and Microsoft Windows Azure. [Emphasis added.]

    What concerned Nirvanix customers — and spooked others — is that the company gave so little notice, initially just two weeks, to move their stuff. In Friday’s statement Nirvanix extended that another two weeks till October 15.  Still, that’s not a lot of time to provision and move a lot of data storage.

It also put a scare into people that other cloud startups that appear to be well funded may not be all that solid after all. Nirvanix itself had raised about $70 million in venture funding, including a $25 million round just six months ago.

    It makes you wonder what other cloud companies are on the cusp.

    nirvanixdoa


    Full disclosure: I’m a Registered GigaOm Analyst.


    Jeff Barr (@jeffbarr) reported Custom Error Pages and Responses for Amazon CloudFront on 9/23/2013:

Amazon CloudFront distributes dynamic and static web content produced by an origin server to viewers located anywhere in the world. If a user requests an object that doesn't exist (a 404 Not Found response) or an unauthorized user attempts to download an object (a 403 Forbidden response), CloudFront used to display a brief and sparsely formatted error message:

Today we are improving CloudFront, giving you the ability to control what's displayed when an error is generated in response to a viewer's request for content. You can have a distinct response for each of the supported HTTP status codes.

    The CloudFront Management Console contains a new tab for Error Responses:

    Click on the Create Custom Error Response button to get started, then create the error response using the following form:

You can create a separate custom error response for each of the ten HTTP status codes listed in the menu. The Response Page Path points to the page to be returned for the error. For best results, point this to an object in an Amazon S3 bucket; this will prove more reliable than storing the pages on the origin server in the event that the server returns any of the 5xx status codes.

    You can also choose the HTTP status code that will be returned along with the response page (in most cases you'll want to use 200):

    Finally, you can set the Error Caching Time To Live (TTL) for the error response. By default, CloudFront will cache the response to 4xx and 5xx errors for five minutes. You can change this value as desired. Note that a small value will cause CloudFront to forward more requests to the origin server; this may increase the load on the server and cause further issues.

    Your origin server can also control the TTL by returning Cache-Control or Expires headers as part of the error response.
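If you prefer to drive the TTL from the origin side, a hedged ASP.NET sketch (a Global.asax error handler; the page markup and the one-minute value are arbitrary) might look like this:

    protected void Application_Error(object sender, EventArgs e)
    {
        var response = HttpContext.Current.Response;
        Server.ClearError();
        response.Clear();
        response.StatusCode = 404;

        // CloudFront honors these caching headers in place of the
        // distribution's error-caching TTL.
        response.Cache.SetCacheability(HttpCacheability.Public);
        response.Cache.SetMaxAge(TimeSpan.FromSeconds(60)); // cache this 404 for one minute

        response.Write("<html><body><h1>Not Found</h1></body></html>");
    }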


    Kevin Kell panned GCE in a Google Compute Engine Revisited post of 9/7/2013 to the Learning Tree blog (missed when posted):

It has been a while since I have written anything about Google cloud computing. I started to take a look at Google Compute Engine over a year ago, but I was stopped because it was in limited preview and I could not access it. It looks like GCE has been generally available since May, so I thought I'd check back to see what has happened.

    To use GCE you sign into Google’s Cloud Console using your Google account. From the Cloud Console you can also access the other Google cloud services: App Engine, Cloud Storage, Cloud SQL and BigQuery. From the Cloud Console you can create a Cloud Project which utilizes the various services.

    Figure 1. Google Cloud Console

Unlike App Engine, which lets you create projects for free, GCE requires billing to be enabled up front. This, of course, requires you to create a billing profile and provide a credit card number. After that is done, you can walk through a series of steps to launch a virtual machine instance. This is pretty standard stuff for anyone who has used other IaaS offerings.

    Figure 2. Creating a new GCE instance

The choice of machine images is certainly much more limited than with other IaaS vendors I've used. At this time there seem to be only four available, all Linux-based. Presumably Google and/or the user community will add more as time passes. It is nice to see the per-minute charge granularity which, in actual fact, is based on a minimum charge of 10 minutes and then 1-minute increments beyond that. The smallest instance type I saw, though, was priced at $0.115 per hour, which makes GCE considerably more expensive than EC2, Azure, and Rackspace. When you click the Create button, it takes only a couple of minutes for your instance to become available.

Connecting to the instance seemed to me a little more complicated than with other providers. I am used to using PuTTY as my SSH client, since I work primarily on a Windows machine. I had expected to be able to create a key pair when I launched the instance, but I was not given that option. To access the newly created instance with PuTTY, you have to create a key pair using a third-party tool (such as PuTTYgen) and then upload the public key to GCE. You can do this through the Cloud Console by creating an entry in the instance Metadata with a key of sshKeys and a value in the format <username>:<public_key>, where <username> is the username you want to create and <public_key> is the actual value of the public key (not the filename) you created. This can be copied from the PuTTYgen dialog. A bit of extra work, but arguably a better practice anyway from a security perspective.
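For example, with a hypothetical username and a truncated key, the metadata value would look something like this (the key text is what PuTTYgen labels "Public key for pasting into OpenSSH authorized_keys file"):

    kevin:ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated... kevin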

    Figure 3. Creating Metadata for the public key

    After that is done it is straightforward to connect to the instance using PuTTY.

    Figure 4. Connected to GCE instance via PuTTY

    At this point I do not believe that Google Compute Engine is a competitive threat to established IaaS providers such as Amazon EC2, Microsoft Azure or Rackspace. To me the most compelling reason to prefer GCE over other options would be the easy integration with other Google cloud services. No doubt GCE will continue to evolve. I will check back on it again soon. [Emphasis added.]


    <Return to section navigation list>