Friday, April 19, 2013

Windows Azure and Cloud Computing Posts for 4/15/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

Top News This Week: Scott Guthrie (@scottgu) announced General Availability of Infrastructure as a Service (IaaS) for Windows Azure on 4/16/2013. See additional coverage in the Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN and Windows Azure Infrastructure and DevOps sections below.

The Windows Azure Team (@WindowsAzure) reported in a 4/18/2013 5:00 PM PDT email: "The Windows Azure VM Role preview is being retired on May 15. Please transition to Windows Azure Virtual Machines." See full details in the Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN section below.

•• Updated 4/19/2013 with new articles marked ••.
• Updated 4/18/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, HDInsight and Media Services

•• Todd Hoff (@toddhoffious) described Tachyon - Fault Tolerant Distributed File System with 300 Times Higher Throughput than HDFS in a 4/17/2013 post to his High Scalability blog:

Tachyon (github) is an interesting new filesystem brought to us by the folks at the UC Berkeley AMP Lab:

Tachyon is a fault tolerant distributed file system enabling reliable file sharing at memory-speed across cluster frameworks, such as Spark and MapReduce. It offers up to 300 times higher throughput than HDFS, by leveraging lineage information and using memory aggressively. Tachyon caches working set files in memory, and enables different jobs/queries and frameworks to access cached files at memory speed. Thus, Tachyon avoids going to disk to load datasets that are frequently read.

It has a Java-like File API, native support for raw tables, a pluggable file system, and it works with Hadoop with no modifications.

It might work well for streaming media too as you wouldn't have to wait for the complete file to hit the disk before rendering.

Discuss on Hacker News

Sounds like an interesting alternative to HDFS and [Windows] Azure Storage Vaults (ASVs).


• See the Jeff Barr (@jeffbarr) announced Local Secondary Indexes for Amazon DynamoDB in a 4/18/2013 post article in the Other Cloud Computing Platforms and Services section at the end of this post.


• Michael Washam (@MWashamMS) reported Set-AzureStorageAccount incorrectly sets Geo-Replication States in a 4/17/2013 post:

There has been a bug identified in the Set-AzureStorageAccount cmdlet that could inadvertently enable or disable your storage account geo-replication settings.

Those who have used this cmdlet should check the geo-replication for their accounts in the Azure Portal immediately.

Note: For more information on Geo-Replication in Windows Azure Storage please visit the following post: http://blogs.msdn.com/b/windowsazurestorage/archive/2011/09/15/introducing-geo-replication-for-windows-azure-storage.aspx

Scenario 1: Changing the label or description of your storage account without specifying -GeoReplicationEnabled will disable geo-replication

PS C:\> Set-AzureStorageAccount -StorageAccountName mwweststorage -Label "updated label"

StorageAccountDescription : 
AffinityGroup :
Location : West US
GeoReplicationEnabled : False
GeoPrimaryLocation : West US
GeoSecondaryLocation :
Label : "updated label"
StorageAccountStatus : Created
StatusOfPrimary :
StatusOfSecondary :
Endpoints : {http://mwweststorage.blob.core.windows.net/, http://mwweststorage.queue.core.windows.net/,
http://mwweststorage.table.core.windows.net/}
StorageAccountName : mwweststorage
OperationDescription : Get-AzureStorageAccount
OperationId : 8dc5e76c-e8ac-460f-a76c-5a0c6f96e2c6
OperationStatus : Succeeded
Scenario 2: Setting geo-replication to disabled in your storage account via PowerShell will actually enable it.

PS C:\> Set-AzureStorageAccount -StorageAccountName mwweststorage -GeoReplicationEnabled $false -Label "disabled geo replication"

StorageAccountDescription : 
AffinityGroup :
Location : West US
GeoReplicationEnabled : True
GeoPrimaryLocation : West US
GeoSecondaryLocation :
Label : "disabled geo replication"
StorageAccountStatus : Created
StatusOfPrimary :
StatusOfSecondary :
Endpoints : {http://mwweststorage.blob.core.windows.net/, http://mwweststorage.queue.core.windows.net/,
http://mwweststorage.table.core.windows.net/}
StorageAccountName : mwweststorage
OperationDescription : Get-AzureStorageAccount
OperationId : 8dc5e76c-e8ac-460f-a76c-5a0c6f96e2c6
OperationStatus : Succeeded
Mitigation Strategy:
  • From PowerShell (see the sketch below):
    • To enable: pass -GeoReplicationEnabled $true
    • To disable: do not pass -GeoReplicationEnabled
  • From the Windows Azure Management Portal:
    • Click on your Storage Account
    • Click on Configure
    • Specify the geo-replication setting directly
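As a concrete illustration of the PowerShell workaround, here is a minimal sketch using the storage account name from the scenarios above; until the fixed cmdlet ships, the safe pattern is to pass -GeoReplicationEnabled $true explicitly whenever geo-replication should remain on:

# Update the label while explicitly keeping geo-replication enabled
# (omitting -GeoReplicationEnabled is what triggers the bug described above).
Set-AzureStorageAccount -StorageAccountName mwweststorage -Label "updated label" -GeoReplicationEnabled $true

# Verify the resulting setting after any change to the account.
(Get-AzureStorageAccount -StorageAccountName mwweststorage).GeoReplicationEnabled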
What are we in the Windows Azure PowerShell Team(s) doing about it?

We will be releasing an update to the current release (0.6.13) that includes a fix for the Set-AzureStorageAccount cmdlet in the very near future.
This will be a BREAKING CHANGE: specifying -GeoReplicationEnabled will be required whenever you use the cmdlet to change storage account settings that enable or disable geo-replication.



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

• Larry Franks (@larry_franks) posted Access Roaming Data in the Cloud to the [Windows Azure's] Silver Lining blog on 4/17/2013:

The following article was written and contributed to the blog by Katrina Lyon-Smith, a Senior Content Publishing Lead at Microsoft.

Roaming data in the cloud

Do you want to create an app that can store data in the cloud?
Do you want to access that data from any device running your app?
Do you want to update live tiles on devices running your app?

Then read on...

Let's say you want to create a sticky note app that lets you add a sticky note on your Windows 8 desktop computer. Then this sticky note is sent to a live tile on your Windows 8 phone and your Windows Surface, so you simply can't forget or ignore it! Windows Azure Mobile Services helps you easily put together Windows Store and Windows 8 phone apps that can do this.

Here are some things you should know before you get started.

Accounts

There are three accounts you need. Three? I know! But it's straightforward to sign up if you don't have them yet.

  1. You need a Microsoft account
  2. For access to the cloud: a Windows Azure account with Windows Azure mobile services enabled
  3. To register your apps: a developer account

You will use existing tutorials that show you how to create an Azure mobile service and put an app together to access roaming data.

Build Windows Azure Mobile Services Apps
Step 1: Create your mobile service app

Use this tutorial to create an Azure mobile service and an app that can access that service. This tutorial steps you through how to do this for a Windows Store app or a Windows 8 phone app. It creates a to do list app for you. You can then change this app to work for the data that you need to store in your SQL Azure tables for your Azure mobile service. If you use the tutorial, the correct references are added so that you can quickly learn how to work with your Azure mobile service.

Step 2: Authenticate a user

Use this tutorial to learn how to authenticate a user with Azure mobile services. The following identity providers are supported:

  1. Microsoft Account
  2. Facebook login
  3. Twitter login
  4. Google login

Enabling single sign on for your app makes it easier for users. To authenticate your user if they are already signed in on a device, use this tutorial.

Note: When you configure a Windows 8 phone app and get its client ID, you must specify that you are configuring a mobile app. Mobile apps use a different OAuth 2.0 authentication flow. Details are here.

Step 3: Only I can see my data

Now that you are authenticating users, you need to make sure that each user only accesses their own data. I don't want to see your to do item to feed the dogs when I don't have any dogs. This tutorial shows you how to use server side scripts to do this.

Step 4: Update live tiles for your app

If you add a to do item using the app, you want that latest to do item sent to the live tile for any device.

To do this, use push notifications with your Azure mobile service. You have to find out the channel uri for the device where your app is running. Then add the logic to notify the user to your server side scripts. Because it is a server side script, it is used by any app that accesses the SQL Azure table.

You need to follow these two tutorials:

  1. Set up your Azure mobile service to send push notifications
  2. Add a table to store the channels and user ids to send out push notifications

Now your app pushes notifications to all channels in the channel table. In a real world app, you may only want to push notifications to those channels associated with the user that is running your app.

Continue with the next step to limit sending live tile notifications only to devices where a specific user is logged in.

Step 5: Update live tiles for a specific user

There are different push notifications for Windows Store apps and Windows 8 Phone apps. You can decide which one is best for your app. To send notifications to the channels that are associated with a specific user, you need to update the server side scripts from the Windows Azure Management Portal.

1. Update the server side script for insert for the channel table

You first need to update the server side script so that each channel uri only has one user associated with it.

This script adds a user id to the channel table. It checks if there is an existing channel for the uri. If there is, it checks if the user id is the current user id. If not, it updates the channel record for that uri with the current user id.

If there is no existing channel, then it adds a record to the table for that uri and user id.

function insert(item, user, request) {
    // Stamp the incoming channel record with the authenticated user's id.
    item.userId = user.userId;

    var channelTable = tables.getTable('Channel');
    channelTable
        .where({ uri: item.uri })
        .read({ success: insertChannelorUpdate });

    function insertChannelorUpdate(existingChannels) {
        if (existingChannels.length > 0) {
            if (existingChannels[0].userId == user.userId) {
                // The uri is already registered to the current user; return the existing record.
                request.respond(200, existingChannels[0]);
            } else {
                // The uri exists but belongs to a different user; re-assign it to the current user.
                var channelItem = {
                    id: existingChannels[0].id,
                    uri: existingChannels[0].uri,
                    userId: user.userId
                };
                channelTable.update(channelItem);
            }
        } else {
            // No channel exists for this uri yet; insert the new record as-is.
            request.execute();
        }
    }
}

Next you need to update the server side script that inserts data in your table so that you can send a push notification when that happens.

2. Update the server side script for insert to your data table

This script only sends push notifications to the channels connected to that user when your data table has a record added. For example, you add an item "Pick up milk" and the text "Pick up milk" is pushed to the live tile for any device that is logged into the app as you.

This script pushes both a Windows 8 phone and a Windows store app notification to demonstrate both.

function insert(item, user, request) {
    // Stamp the new item with the authenticated user's id before inserting it.
    item.userId = user.userId;

    request.execute({
        success: function () {
            request.respond();
            sendNotifications();
        }
    });

    function sendNotifications() {
        // Read only the channels registered to the current user and push to each of them.
        var channelTable = tables.getTable('Channel');
        channelTable.where({ userId: user.userId }).read({
            success: function (channels) {
                channels.forEach(function (channel) {
                    // Windows Store (WNS) square tile notification.
                    push.wns.sendTileSquareText02(channel.uri, {
                        text2: item.text
                    }, {
                        success: function (pushResponse) {
                            console.log("Sent push windows store:", pushResponse);
                        }
                    });
                    // Windows Phone (MPNS) flip tile notification.
                    push.mpns.sendFlipTile(channel.uri, {
                        title: item.text
                    }, {
                        success: function (pushResponse) {
                            console.log("Sent push windows 8 phone:", pushResponse);
                        }
                    });
                });
            }
        });
    }
}

Now you can access roaming data in the cloud with your app. You can also add functionality to notify a user of an event.


Brian Hitney (@bhitney) continued his DevRadio series with Microsoft DevRadio: (Part 5) Using Windows Azure to Build Back-End Services for Windows 8 Apps – Adding Push Notifications on 4/15/2013:

Abstract:
In Part 5 of their "Using Windows Azure to Build Back-End Services for Windows 8 apps" series, Peter Laudati, Brian Hitney and Andrew Duthie show us how to quickly add push notifications to the GameLeader Service using Azure Mobile Services. Check out the full article here.

Watch Part 1 | Part 2 | Part 3 | Part 4

After watching this video, follow these next steps:

Step #1 – Try Windows Azure: No cost. No obligation. 90-Day FREE trial.
Step #2 – Download the Tools for Windows 8 App Development
Step #3 – Start building your own Apps for Windows 8

Subscribe to our podcast via iTunes or RSS

If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:

Register for our Windows Azure Hands-on Lab Online (HOLO) events today!




<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

• Tim Huckaby (@TimHuckaby) asserted “Microsoft Research's new technology advances scientific research in meaningful ways” in a deck for his Infer.NET: The First Ever Patent for Humanity Award in Technology article of 4/18/2013 for DevPro:

Microsoft is still the leader in R&D spending. No other technology company spends more. Microsoft Research is as much of a leader as it was when it was founded over 25 years ago. If you're into the bleeding edge of technology like I am, then you know that a visit to the Microsoft Research website is ten times the time sink that Facebook is. I can spend hours on the Microsoft Research website at a concentration level that overcomes the state of "totally connected." Working for Microsoft Research seems like the perfect job. [See below.]

Infer.NET: A Development Framework for Machine Learning

imageInfer.NET is one technology that's made it out of Microsoft Research that's pretty exciting. Microsoft Research has been working on Infer.NET for over 10 years. That's a huge investment on a technology that will see little or no return on investment in terms of dollars. In terms of its ROI on humanitarian issues in science, medicine, and environment, I have not seen an equal in a long time. Microsoft has stated that "the power of Infer.NET is that it can accelerate our understanding of complex problems, such as those commonly found in health, biology and the environment, and allows Microsoft Research and scientists across the world to advance towards solutions even faster."

Infer.NET is a development framework that makes it much easier to apply advanced machine learning techniques to solve difficult problems. Infer.NET is a .NET library for machine learning. Microsoft has stated that "it provides state-of-the-art algorithms for probabilistic inference from data. Various Bayesian models such as Bayes Point Machine classifiers, TrueSkill matchmaking, hidden Markov models, and Bayesian networks can be implemented using Infer.NET."

Now I don't have a background in math or science, and I barely understand half of that statement at a high level. And if you're an application programmer, then I'm guessing you don't either. And it doesn't matter. What does matter is that scientists, mathematicians, technologists, and the like will use Infer.NET to fight the good fight by solving real-life problems.

The power of Infer.NET is that it can accelerate the understanding of complex problems, such as those commonly found in health, biology, and the environment. This .NET library paves the way for scientists across the world to advance towards solutions in these areas even faster.

Infer.NET is a technology that has not only made it out of Microsoft Research, but was also just awarded a significant patent. On April 11, Infer.NET was recognized with the United States Patent and Trademark Office's first ever Patents for Humanity Award in the information technology category.

Use Cases with Infer.NET

A technology is always best understood by its use cases. Here are the three most publicized projects that utilize Infer.NET:

  • An asthma study that investigates the early indicators of severe asthma in children and tries to shed light on the environmental and genetic causes of asthma
  • An analysis of key parts of an individual's DNA sequence to shed new light on how variations in our genetic makeup can make us susceptible to different diseases
  • An examination of how key drivers of forest dynamics, such as the growth and mortality rates of different sized trees in different kinds of forests, vary across geographic space and from year to year, to improve understanding of the effects of climate change.
Summing It Up: The Good News

By the way, Microsoft provides Infer.NET free of charge for non-commercial purposes, such as scientific or medical research. Just as you'd expect. Microsoft is able to make investments like Infer.NET free of charge for non-commercial uses because Microsoft can commercialize investments in R&D in many other places. Be sure to visit Microsoft's Infer.NET website to download and learn more about this exciting technology.

I could write this monthly column just on the cool stuff that's going on in Microsoft Research. That just might be a great idea. And think about this: In this NDA world we live in, imagine what's going on in Microsoft Research that's not public or we just don't know about yet.

Related: "Kinect for Windows SDK 1.7 Released, Now Includes Kinect Interactions & Fusion Features"

The Infer.NET website reports that the Infer.NET team is "now hiring for the Infer.NET project."


• See The San Francisco Bay Area Azure Developers group will host an Azure Cloud Numerics & F#: How I Learned to Stop Worrying and Love Big Data presentation article in the Cloud Computing Events section below.

No significant OData articles today


<Return to section navigation list>

Windows Azure Service Bus, Caching, Access Control, Active Directory, Identity and Workflow

•• Vittorio Bertocci (@vibronet) described The Windows Azure AD Application Model in a 4/17/2013 post:

In the various announcements and walkthroughs you had the chance to experience the changes in Windows Azure AD's product surface. In this (hopefully not too long, it's midnight already) post I am going to touch on some of the deeper changes that took place beneath the surface. Albeit less evident at first, some of those have far-reaching consequences you should be aware of – especially if you plan to write multitenant applications.

Applications vs. ServicePrincipal

Remember the mega post (oh pioneers!) I wrote when we released the first Web SSO preview, or any of the presentation recordings since then?
One of the key concepts introduced at the time was the idea of ServicePrincipal: an entry in your Windows Azure AD tenant, much like the traditional User Principals, used to describe applications. At the time you just had PowerShell cmdlets to provision applications, and there was no way for you to avoid operating at the ServicePrincipal level.

ServicePrincipals are still the way in which applications are concretely provisioned in a directory. For example, it is the set of AD roles an application's ServicePrincipal belongs to that determines what the application can do in terms of directory access (SSO only, SSO+read access, etc.).
That said: with the GA release, Windows Azure AD introduced a further abstraction level which decouples high level application definition operations from the low-level provisioning of ServicePrincipals.

When you go through an application registration flow in the Windows Azure portal, such as the one described here, you are really doing the moral equivalent of two distinct operations in one.

  • You are creating an object of type Application, which describes the main application coordinates (such as URL to use for Web SSO, app id URI to identify the app, client ID and key to be used in OAuth2 flows for invoking the Graph, etc.) and some config settings (such as whether the app is single-tenant or available to other tenants via consent flows)
  • In the same process, the portal is using that Application object as a blueprint for creating a new ServicePrincipal in your tenant. Such ServicePrincipal will have the same coordinates (URLs, URIs, IDs, keys) as the corresponding Application and will also have the access level (SSO, SSO+read, SSO+read+write) you established at creation

The advantage of that will become clear in a moment; before delving into that, let’s stop for a moment and see the above in practice.

Head to your own portal and create a test application. Below you can see what I used for mine:


If I take a look at the entities in the directory after I’ve done that, I’ll find in the applications collection the following new entry:

    {
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Application",
"objectType": "Application",
"objectId": "b4d66176-4654-4b7d-892e-d3b564bc7910",
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"availableToOtherTenants": false,
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"identifierUris": [
"https://cloudidentity.net/testapp1"
],
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publicClient": null,
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null
}

Pretty interesting stuff. I won’t (yet) go into the details of everything you see there, but for the time being: please note that it closely mirrors the coordinates provided in the portal.

Now, let’s take a peek at the ServicePrincipals collection.  Among the many built-in principals you’ll find the following entry:

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
"objectType": "ServicePrincipal",
"objectId": "a3518a24-2017-4c63-89d3-adec4eeaa9ad",
"accountEnabled": true,
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publisherName": "Vittorio.Bertocci",
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null,
"servicePrincipalNames": [
"https://cloudidentity.net/testapp1",
"7042160d-2d58-4528-878f-b05b0edc799e"
],
"tags": [
"WindowsAzureActiveDirectoryIntegratedApp"
]
}

Yep, that’s a SP with the same coordinates and some extra info. For example, there is a field “publisher” which corresponds to the name of my directory tenant (confusingly named as myself, sorry about that, details here).
There’s more! Let’s take a look at the roles this SP belongs to: it’s easy, you just GET the following: https://graph.windows.net/cloudidentity.net/servicePrincipals/a3518a24-2017-4c63-89d3-adec4eeaa9ad/memberOf where the GUID is the ObjectID. The result:

{
"odata.metadata": "https://graph.windows.net/cloudidentity.net/$metadata#directoryObjects",
"value": [
{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Role",
"objectType": "Role",
"objectId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b",
"description": "Allows access to various read only tasks in the directory. ",
"displayName": "Directory Readers",
"isSystem": true,
"roleDisabled": false
}
]
}

Yes, this SP has the access level we specified at the app's creation: that means that the access level to the directory is not an intrinsic property of the application. Want circumstantial evidence? Let's see what we get if we try to see if the corresponding Application belongs to anything, by GETting https://graph.windows.net/cloudidentity.net/applications/b4d66176-4654-4b7d-892e-d3b564bc7910/memberOf.

Surprise!

{
  "Status Code" : "BadRequest",
  "Description" : "The remote server returned an error: (400) Bad Request.",
  "Response" : "{"odata.error":{"code":"Request_BadRequest","message":{"lang":"en","value":"Unsupported directory object class 'Application' in query against link 'memberOf'."}}}"
}

The entity type does not even support memberOf. I rest my case.
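As an aside, if you want to reproduce these Graph GETs yourself rather than through a browser tool, a minimal PowerShell sketch follows. It assumes you have already acquired an OAuth2 access token for the Graph service (for example, using the client ID and key registered above); depending on the Graph version you target, additional query string parameters such as api-version may also be required:

# Hypothetical sketch: ask the Graph API which roles a ServicePrincipal belongs to.
# $accessToken is assumed to have been obtained separately for https://graph.windows.net/.
$uri = "https://graph.windows.net/cloudidentity.net/servicePrincipals/a3518a24-2017-4c63-89d3-adec4eeaa9ad/memberOf"
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod -Uri $uri -Headers $headers -Method Get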

OK, now let's make things more interesting. Let's go back to the portal and make the app available to other tenants. In my case I already picked the URI in the right format (see this), hence all I need to do is flip the "External access" switch in the config page for the application.


Let’s take a look again at the Application object:

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Application",
"objectType": "Application",
"objectId": "b4d66176-4654-4b7d-892e-d3b564bc7910",
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"availableToOtherTenants": true,
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"identifierUris": [
"https://cloudidentity.net/testapp1"
],
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publicClient": null,
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null
}

Yes, AvailableToOtherTenants is now set to true.
The corresponding SP? No changes.

Application, ServicePrincipal and Consent Operations

The introduction of the Application object was largely made for facilitating things in multitenant scenarios. Let’s take a look at that in some details.

I am going to navigate to the app's consent page and sign in as another tenant's admin; and I am going to consent to give the app access to the new admin's directory. At that point, we'll take a look at what happened in the directory itself.

I’ll start by grabbing the consent URL from the portal and edit it to request a different access level (DirectoryWriters) just to stress my point that it’s independent from the app itself. I’ll ignore the return URL part, we don’t need to write a single line of code for messing with the app settings. The result:

https://go.microsoft.com/fwLink/?LinkID=286623&clcid=0x409&ClientID=7042160d-2d58-4528-878f-b05b0edc799e&RequestedPermissions=DirectoryWriters&ConsentReturnURL=https%3A%2F%2Flocalhost%3A2121%2F

Let’s open another IE in private mode (or another browser type) and paste the consent URL. Then, let’s sign in as the admin of another Windows Azure AD tenant (in my case, treyresearch1.onmicrosoft.com).


Note the access levels.

Click Grant. Of course you’ll be bounced to nowhere, given that the reply URL is bogus, but at that point the consent has been already registered.

Want proof? In the same inPrivate browser navigate to the Windows Azure portal, AD tab, integrated apps: the test app will be there.


As you can see, it’s easy to tell that app apart from the ones developed directly in the tenant. That gets even more evident if you click on its entry:


Note that the access level that “my test app” has in treyresearch1.onmicrosoft.com is different than the one it had in the tenant it was created in, cloudidentity.net.

OK, let’s take a peek under the hood.

First interesting fact: If I query treyresearch1’s applications entities I do not find an entry for “my test app”.

Do you want to take a guess about if I’ll find something in the serviceprincipals? You got it, it’s there!

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
"objectType": "ServicePrincipal",
"objectId": "6629ad6c-891c-4b2c-97d3-63ec3c6b2579",
"accountEnabled": true,
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publisherName": "Vittorio.Bertocci",
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null,
"servicePrincipalNames": [
"https://cloudidentity.net/testapp1",
"7042160d-2d58-4528-878f-b05b0edc799e"
],
"tags": [
"WindowsAzureActiveDirectoryIntegratedApp"
]
}

Let’s paste again the one we got in cloudidentity.net to spot the differences:

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
"objectType": "ServicePrincipal",
"objectId": "2be215fa-6970-4ca7-bb1f-2ca304196655",
"accountEnabled": true,
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publisherName": "Vittorio.Bertocci",
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null,
"servicePrincipalNames": [
"https://cloudidentity.net/testapp1",
"7042160d-2d58-4528-878f-b05b0edc799e"
],
"tags": [
"WindowsAzureActiveDirectoryIntegratedApp"
]
}

The only difference is the ObjectId, which of course must be globally unique for every object everywhere. Apart from that, and the different role memberships determined at creation/consent time, the two are identical projections from the original Application’s entry back in the cloudidentity.net tenant.

Things to Note

Here are a few things you want to keep an eye on.

Multitenant Apps Are Tied to Their Origin’s Tenant

The Application object is the enabler of the consent flow magic. Also, the Application object lives in the tenant in which the app was originally created.

That means that the app's destiny is tied to the tenant's, so you might want to take that into account when making lifecycle decisions (migrating to new tenants and similar).

Once a ServicePrincipal is Created Via Consent, Ties to the Original Application Object Are Severed

Say that you want to change something in your app, like the app URL. Let's actually do it and see what happens. Head back to the Windows Azure portal of the original tenant, and modify the app's reply URL.


Let’s take a look at the Application object.

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Application",
"objectType": "Application",
"objectId": "b4d66176-4654-4b7d-892e-d3b564bc7910",
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"availableToOtherTenants": true,
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"identifierUris": [
"https://cloudidentity.net/testapp1"
],
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publicClient": null,
"replyUrls": [
"https://localhost:2121/tornaacasalasssie"
],
"samlMetadataUrl": null
}

Yep, updated.

What about the ServicePrincipal, still in this tenant?

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
"objectType": "ServicePrincipal",
"objectId": "83ac2b6e-d20f-4770-874d-4e0859579476",
"accountEnabled": true,
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publisherName": "Vittorio.Bertocci",
"replyUrls": [
"https://localhost:2121/tornaacasalasssie"
],
"samlMetadataUrl": null,
"servicePrincipalNames": [
"https://cloudidentity.net/testapp1",
"7042160d-2d58-4528-878f-b05b0edc799e"
],
"tags": [
"WindowsAzureActiveDirectoryIntegratedApp"
]
}

Updated as well, as expected.

What about the ServicePrincipal provisioned via consent flow in the tenant treyresearch1.onmicrosoft.com?

{
"odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
"objectType": "ServicePrincipal",
"objectId": "6629ad6c-891c-4b2c-97d3-63ec3c6b2579",
"accountEnabled": true,
"appId": "7042160d-2d58-4528-878f-b05b0edc799e",
"displayName": "My test app",
"errorUrl": null,
"homepage": "https://localhost:2121/",
"keyCredentials": [],
"logoutUrl": null,
"passwordCredentials": [],
"publisherName": "Vittorio.Bertocci",
"replyUrls": [
"https://localhost:2121/"
],
"samlMetadataUrl": null,
"servicePrincipalNames": [
"https://cloudidentity.net/testapp1",
"7042160d-2d58-4528-878f-b05b0edc799e"
],
"tags": [
"WindowsAzureActiveDirectoryIntegratedApp"
]
}

Yes, the ServicePrincipal in the other tenant is unaffected by the change in the Application object in the original tenant.

The consent operation uses the state of the Application object at the consent instant and creates a new ServicePrincipal out of it, but after that there is no further synchronization involved. Excluding the programmatic routes, the main way of applying the new settings is to revoke the consent to the app and grant it again.
My interpretation here (remember my disclaimer!!! this is my personal blog!) is that in the power balance between tenant administrator and ISV the directory favors the admin by default. The tenant admin is the king of his own castle, and changes are always under his/her explicit imprimatur: that includes app lifecycle changes such as this one.

This is one of the reasons for which it is super-important for you to avoid exposing unnecessary implementation details when defining the application coordinates. A classic example would be to specify a special page/action in the reply URL instead of referring to the app’s root URL; any changes in that, something that would normally be an implementation detail private to your app, would cause you unnecessary churn.

Bottom line:  Before you promote an app to be externally available, it is good practice to ensure that its protocol coordinates are stable; furthermore, it is very important for you to ensure that you have a way of contacting your customers should you need to apply emergency changes. The consent flow is extremely powerful, and allows you to onboard organizational customers with unprecedented ease, but it remains a business critical feature and as such requires thorough planning.


Manu Cohen-Yashar (@ManuKahn) described how to use Client Certificates in Windows Azure in a 4/7/2013 post (missed when published):

A simple method to authenticate customers is by using client certificates. Smart card and enterprise customers are just two basic scenarios.

Let's describe how to implement client certificate authentication in a simple Web API service deployed in Windows Azure.

The first thing we need to do is establish an SSL channel; client certificates can only be attached to an SSL request. To do that we need to create an SSL certificate and have it signed by a trusted CA (Certificate Authority). One option is to create a certificate request using IIS and send it to the CA.


Another option is to create a self-signed CA, install it into our trusted certificate store, and then use it to create our SSL certificates.

Let's create a CA certificate:

@echo off
echo delete old CA certificate
certutil -delstore root "My CA"
del MyCA.*
echo create My CA certificate
makecert -r -pe -n "CN=My CA" -ss CA -a sha1 -sky signature -cy authority -sv myCA.pvk myCA.cer
pvk2pfx -pvk myCA.pvk -spc myCA.cer -pfx myCA.pfx -po password
echo install My CA certificate
certutil.exe -addstore root myCA.cer

Now it's time to create a new SSL certificate using our CA:

echo off
del server.*
echo Create SSL certificate
makecert -pe -n "CN=server" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic myCA.cer -iv myCA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -sv server.pvk server.cer
pvk2pfx -pvk server.pvk -spc server.cer -pfx server.pfx -po 123456

The final step is to create a client certificate using our CA. This certificate will be used by clients to authenticate.

echo off
echo delete existing Client certificate
del ClientCert.*
certutil -delstore my "ClientCert"
echo create Client certificate
makecert -pe -n "CN=ClientCert" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.2 -ic myCA.cer -iv myCA.pvk -sv ClientCert.pvk ClientCert.cer
pvk2pfx -pvk ClientCert.pvk -spc ClientCert.cer -pfx ClientCert.pfx -po password
certutil -addstore -user my ClientCert.cer

Now we need to configure IIS to accept our client certificates. By default IIS will ignore incoming client certificates and the certificates will not be accessible in our code.


This is very simple on an on-premises server, yet in Azure it can be quite tricky. As Azure developers we are used to configuring our machines with startup tasks. The problem is that SSL settings are enforced on existing web sites, and when startup tasks run, our application's web site has not yet been created, so the tasks will fail. The solution is to configure IIS in code. Fortunately, the NuGet package "Microsoft.Web.Administration" provides all the API we need.

using (var serverManager = new ServerManager())
{
   try
   {
       var siteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";
       var config = serverManager.GetApplicationHostConfiguration();
       var accessSection = config.GetSection("system.webServer/security/access", siteName);
       accessSection["sslFlags"] = @"SslNegotiateCert";

       serverManager.CommitChanges();
   }
   catch (Exception ex)
   {
        ...
   }
}

Calling this code from our role’s OnStart method will do the job as long as we run in elevated execution:
<Runtime executionContext="elevated">

Now we need to upload both the CA certificate (to the trusted certificate authorities store) and our SSL certificate (to the personal store), and configure an SSL endpoint.


The certificates must be uploaded independently to the hosted service using the portal or the management API.
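If you prefer scripting over the portal, the Azure PowerShell module includes a cmdlet for uploading certificates to a hosted service. A rough sketch follows (the service name is a placeholder, the file names match the scripts above, and the mapping into the Trusted Root or Personal store on the role instances still comes from the role's certificate configuration):

# Upload the CA certificate and the SSL certificate to the hosted service's certificate store.
# The .pfx upload needs the private-key password that was used when exporting the certificate.
Add-AzureCertificate -ServiceName "myhostedservice" -CertToDeploy ".\myCA.cer"
Add-AzureCertificate -ServiceName "myhostedservice" -CertToDeploy ".\server.pfx" -Password "123456"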

Now we can create a delegating handler that will authenticate all incoming requests and plug it into the ASP.NET Web API pipeline.

public class CertificateAuthHandler : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        // Retrieve the client certificate that was attached to the SSL request.
        X509Certificate2 certificate = request.GetClientCertificate();
        if (certificate == null || !CertificateValidator.IsValid(certificate))
        {
            // CertificateValidator and Logger are the author's helper classes (not shown here).
            Logger.Warn("No certificate was found or it is invalid");
            return Task<HttpResponseMessage>.Factory.StartNew(
                () => request.CreateResponse(HttpStatusCode.Unauthorized));
        }

        // The certificate is valid: set the caller's principal for the rest of the pipeline.
        Thread.CurrentPrincipal = CertificateValidator.GetPrincipal(certificate);
        return base.SendAsync(request, cancellationToken);
    }
}

To plug in the delegating handler, let's update WebApiConfig.cs:

public static class WebApiConfig
{
   public static void Register(HttpConfiguration config)
   {
      config.Routes.MapHttpRoute(
         name: "DefaultApi",
         routeTemplate: "api/{controller}/{id}",
         defaults: new { id = RouteParameter.Optional },
         constraints: null );

       config.EnableQuerySupport();
       config.MessageHandlers.Add(new CertificateAuthHandler());

    }
}

That’s it.

Now we are ready to call our service on an SSL REST endpoint which is protected by a client certificate.



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

The Windows Azure Team (@WindowsAzure) reported “ACTION REQUIRED: Windows Azure VM Role says goodbye” in a 4/18/2013 5:00 PM PDT email:

Dear Customer,

The Windows Azure VM Role preview is being retired on May 15. Please transition to Windows Azure Virtual Machines, and delete your VM Role preview instances as soon as possible.

Thank you for participating in the preview program for the Windows Azure VM Role. Since we started the preview program, we have learned a lot about your needs and use cases. Your feedback and insights have helped fine-tune our approach to infrastructure services. We’ve directed all of that feedback into the design of Windows Azure Virtual Machines, the evolution of VM Role.

On April 16, 2013, we announced the general availability of Windows Azure Virtual Machines. Virtual Machines provides on-demand scalable compute resources to meet your growing business needs and can extend your existing infrastructure and apps into the cloud. With the general availability of Windows Azure Virtual Machines we are retiring the VM Role preview.

ACTION REQUIRED
Please migrate from VM Role to Virtual Machines, and delete your running instances of VM Role as soon as possible. You can follow these instructions to migrate to Virtual Machines.

Here are important dates to note:

  • Starting May 15, 2013, calls to create new VM Role deployments will fail.
  • On May 31, 2013, all running VM Role instances will be deleted.

Please note that you will continue to be billed for your VM Role consumption until your running instances are deleted.

Thank you for participating in the VM Role preview program and shaping the future of Windows Azure Virtual machines! You can find more information on Windows Azure Virtual Machines here.

Thank you,
Windows Azure Team


Scott Guthrie (@scottgu) posted Windows Azure: General Availability of Infrastructure as a Service (IaaS) to his ASP.NET blog on 4/16/2013:

This morning we announced the general availability release of our Infrastructure as a Service (IaaS) support for Windows Azure – including our new Virtual Machine and Virtual Network capabilities.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production apps.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using it today.

In addition to supporting all of the features and capabilities included during the preview, today’s IaaS release also includes some great new enhancements:

  • New VM Image Templates (including SQL Server, BizTalk Server, and SharePoint images)
  • New VM Sizes (including Larger Memory Machines)
  • New VM Prices (we’ve reduced prices 21%-33% for IaaS and PaaS VMs)

Below are more details on today’s release and some of the new enhancements.  You can also read Bill Hilf’s blog post to learn about some of the customers who are already using the IaaS capabilities in production.

Windows Azure Virtual Machines

Windows Azure Virtual Machines enable you to deploy and run durable VMs in the cloud.  You can easily create these VMs from an Image Gallery of pre-populated templates built-into the Windows Azure Management Portal, or alternatively upload and run your own custom-built VHD images.  Our built-in image gallery of VM templates includes both Windows Server images (including Windows Server 2012, Windows Server 2008 R2, SQL Server, BizTalk Server and SharePoint Server) as well as Linux images (including Ubuntu, CentOS, and SUSE Linux distributions).

Windows Azure uses the same Hyper-V virtualization service built-into Windows Server 2012, which means that you can create and use a common set of VHDs across your on-premises and cloud environments.  No conversion process is required as you move these VHDs into or out of Windows Azure – your VMs can be copied up and run as-is in the cloud, and the VMs you create in Windows Azure can also be downloaded and run as-is on your on-premise Windows 2012 Servers.  This provides tremendous flexibility, and enables you to easily build hybrid solutions that span both cloud and on-premises environments.
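As a rough illustration of that portability, the Azure PowerShell cmdlets can upload an on-premises VHD and register it as an image; the sketch below uses placeholder storage account, container, and image names:

# Upload a locally built, sysprepped VHD to blob storage as a page blob,
# then register it as an OS image that new Virtual Machines can be created from.
Add-AzureVhd -LocalFilePath "C:\vhds\myserver.vhd" -Destination "https://mystorageaccount.blob.core.windows.net/vhds/myserver.vhd"
Add-AzureVMImage -ImageName "my-custom-image" -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/myserver.vhd" -OS Windows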

Easy to Get Started

You can quickly create a new VM in only a few seconds using the Windows Azure Management Portal.  Just click the New command on the bottom left of the portal, and then use the Virtual Machine->Quick Create option to instantiate a new Virtual Machine anywhere in the world (if you want to do this via the command line you can also download our command-line-tools for Windows-based PowerShell users or for Linux/Mac users).
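For the command-line route, a hedged PowerShell sketch looks roughly like the following; the service name, VM name, credentials, and image name are placeholders, and the -AdminUsername parameter assumes a module version from the general availability timeframe or later:

# Create a small Windows Server VM from a gallery image in a new cloud service.
# Use Get-AzureVMImage to list the exact gallery image names available to you.
New-AzureQuickVM -Windows -ServiceName "mytestsvc" -Name "myvm1" `
    -ImageName "<gallery-image-name-from-Get-AzureVMImage>" `
    -AdminUsername "vmadmin" -Password "MyP@ssw0rd!" `
    -Location "West US" -InstanceSize "Small"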


Once you create a new VM instance you can easily Remote PowerShell, SSH, or Terminal Server into it in order to customize the VM however you want (and optionally capture your own custom image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.

Integrated Management and Monitoring

In addition to enabling you to create VMs, the Windows Azure Management Portal also provides built-in management and monitoring support of them once they are running:


Durable Data Disks

Virtual Machines in Windows Azure can optionally attach and use data disks for storage (each disk can be up to 1 TB in size):


Once attached, these disks look like standard disks/devices to a Virtual Machine, and you can format them using whatever disk format you want (e.g. NTFS for Windows, ext3 or ext4 for Linux, etc).  The disks are both persistent and highly durable, and are implemented on top of Windows Azure Blob Storage (which ensures that each drive is maintained in triplicate for high availability).
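If you would rather script this than use the portal, here is a minimal sketch for attaching an empty data disk with the Azure PowerShell cmdlets (the service and VM names are placeholders):

# Attach a new, empty 100 GB data disk at LUN 0 and apply the change to the running VM.
Get-AzureVM -ServiceName "mytestsvc" -Name "myvm1" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "datadisk1" -LUN 0 |
    Update-AzureVM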

Built-in Load Balancer Support

Virtual Machines in Windows Azure can also optionally utilize a network load-balancer (LB) at no extra charge – enabling you to distribute traffic sent to a single IP address/port to multiple VM instances.  You can use the load balancer both to scale out your apps and to provide better fault tolerance when a VM is down or you are performing maintenance on it.  The load balancer can automatically remove the machine from rotation when this happens:


Setting up load-balancing across VMs is easy – just click the Endpoints tab within a VM in the Windows Azure Management Portal and then choose to add an endpoint to the VM (for the first VM you want to add), and then select “load-balance traffic on an existing endpoint” for the subsequent VM instances:


You can find more details on how to configure a set of load-balanced VMs in this common task on load-balanced sets.
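The same configuration can also be scripted. Here is a hedged sketch that places matching endpoints on two VMs into one load-balanced set on port 80; the service, VM, and set names are placeholders, and the probe settings should be adjusted to suit your app:

# Add an HTTP endpoint to each VM and place both endpoints in the same load-balanced set.
Get-AzureVM -ServiceName "mytestsvc" -Name "webvm1" |
    Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "weblb" -ProbePort 80 -ProbeProtocol http -ProbePath "/" |
    Update-AzureVM

Get-AzureVM -ServiceName "mytestsvc" -Name "webvm2" |
    Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "weblb" -ProbePort 80 -ProbeProtocol http -ProbePath "/" |
    Update-AzureVM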

Windows Azure Virtual Networks

Along with the general availability of Windows Azure Virtual Machines, we are also today announcing the general availability of Windows Azure Virtual Networks.  Windows Azure Virtual Networks enable you to accomplish the following tasks:

  • Create a virtual private network with persistent private IPs: You can bring your preferred private IPv4 space (10.x, 172.x, 192.x) to Windows Azure using a Virtual Network. Furthermore, Virtual Machines within a Virtual Network will have a stable private IP address, even across hardware failures.
  • Cross-premises connectivity over site-to-site IPsec VPNs: You can extend your on-premises network to Windows Azure and treat Virtual Machines in Windows Azure as a part of your organization’s existing network using a Virtual Network gateway to broker the IPSec connection. We support standard VPN hardware devices from Cisco and Juniper to enable this.
  • Configure custom DNS servers: Using a Virtual Network, you can point your Virtual Machines to a DNS server on-premises or a DNS server running in Windows Azure on the same Virtual Network. This also enables running a Windows Server Active Directory domain controller on Windows Azure.

  • Extended trust and security boundary: Deploying Virtual Machines into a Virtual Network will extend the trust boundary to that Virtual Network. You can create several Virtual Machines and Cloud Services within a single Virtual Network and have them communicate using the private address space. This allows simple communication between different Virtual Machines or even Virtual Machines and web/worker roles in separate Cloud Services, without having to go through a public IP address. Furthermore, Virtual Machines outside the Virtual Network have no way to identify or connect to services hosted within Virtual Network, providing an added layer of isolation to your services.

Creating a Virtual Network

Creating a virtual network in Windows Azure is easy, just click the New command on the bottom left of the portal, and then use the Networks>Virtual Network->Quick Create (or Custom Create) option to instantiate a new Virtual Network:


Virtual Networks can be created and used in Windows Azure for free. The only thing we charge extra for is if you enable the VPN gateway support – at which point we charge a per hour + bandwidth usage fee.  You can find more information on Virtual Network and how it complements our Virtual Machine offering here.
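Virtual Networks can also be managed from PowerShell by exporting, editing, and re-importing the subscription’s network configuration document; a rough sketch (the file path is a placeholder):

# Export the subscription's current network configuration, edit it offline
# (add virtual networks, subnets, DNS servers, etc.), then import it back to apply the changes.
Get-AzureVNetConfig -ExportToFile "C:\config\NetworkConfig.xml"
Set-AzureVNetConfig -ConfigurationPath "C:\config\NetworkConfig.xml"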

New VM Image Templates (including SQL Server, BizTalk, and SharePoint images)

Today’s Windows Azure release includes several new VM image templates that you can use to easily create and run new Virtual Machines.  These include several new SQL Server 2012 images (including standard and enterprise edition templates), new BizTalk Server 2013 images (including Evaluation, Standard and Enterprise editions), and a new SharePoint Server 2013 image:


Hourly Billing Support

In addition to making it easier and faster to get started, these SQL Server and BizTalk Server images also enable an hourly billing model which means you don’t have to pay for an upfront license of these server products – instead you can deploy the images and pay an additional hourly rate above the standard OS rate for the hours you run the software.  This provides a very flexible way to get started with no upfront costs (instead you pay only for what you use).  You can learn more about the hourly rates here.

More Details on SQL Server, BizTalk and SharePoint Server

More details on deploying SQL Server in Windows Azure Virtual Machines can be found here and details on BizTalk Server can be found here

This week we are also releasing a SharePoint deployment guide as well as PowerShell Scripts that make it easy to get started with SharePoint on Windows Azure – and to enable the automation of a complete SharePoint farm.  Once deployed, you can also administer SharePoint 2013 directly using PowerShell.

New VM Sizes (including Larger Memory Options)

With today’s Windows Azure release we are also adding two new VM size options to the existing 5 VM sizes we supported during the public preview.  These two new VM sizes include a new 4 core x 28GB RAM configuration as well as an 8 core x 56GB RAM configuration.  You can now select these options when you create a new VM:


These new VM sizes enable you to run even larger workloads with Windows Azure.  More details on the different sizes and their capabilities can be found here.

New VM Prices (including a price drop of 21% to 33%)

With today’s Windows Azure release we are also announcing significant price reductions to our Windows Azure compute options.  This new pricing delivers a 21% price reduction from the previously announced pricing of Windows Azure Virtual Machines (IaaS), and a 33% price reduction for solutions deployed using our Windows Azure Cloud Services (PaaS) model.  Our new VM pricing also matches Amazon’s on-demand VM pricing for both Windows and Linux VMs.

New Windows Azure Virtual Machine Compute Pricing

Below are the new hourly on-demand rates for Windows Azure Virtual Machines:


Note that the above prices are for hourly on-demand usage (meaning there is no commitment to use them for more than an hour and you pay only for what you consume).  Complete pricing details for Windows Azure Virtual Machines can be found here.

Commitment Pricing Discounts

You can also optionally take advantage of our 6 Month and 12 Month commitment plans to obtain significant discounts on the standard pay as you go rates.  With a commitment plan you commit to spend a certain amount of money each month and in return we give you a discount on any Windows Azure resource you use that money on (and the more money you commit to use the bigger the discount we give).

One of the nice aspects of our Windows Azure commitment plans is that they don’t lock you into having to specify upfront the number of VMs or specific VM sizes you want to use (or which regions or availability zones you want to use them in).  Instead you simply commit to spend a certain amount of money each month and we’ll give you a discount on any Windows Azure resource you use that money on.  This provides you with the flexibility to change your VM deployment sizes dynamically without having to worry about being locked into a particular configuration, as well as the option to spend the commitment on both IaaS + PaaS based services (and take advantage of a discount on both).  You can learn more about our commitment pricing plans here.

Other Improvements

Today’s Windows Azure release also includes a number of other small VM enhancements including:

  • Increased default OS disk size: During the preview our default OS disk partition size was 30GB.  Based on customer feedback all of our new images now default to 127GB in size for the OS partition.
  • Ability to customize the Administrator username: We now enable you to customize the login name of the administrator account when creating VM images.  This enables you to avoid always having a well known username on your VMs (a good security best practice).
  • Remote PowerShell Enabled By Default: When deploying your Virtual Machine using PowerShell, we now enable remote PowerShell by default in all Windows Server OS images - including the SQL Server, BizTalk Server, and SharePoint images.  This makes it easier to automate setting up VMs without ever having to log in interactively to a newly deployed instance (a quick sketch follows below).
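For example, here is a minimal sketch of opening a remote session against a newly deployed VM; the service name, VM name, and credentials are placeholders, and it assumes an Azure PowerShell module version that includes the Get-AzureWinRMUri cmdlet plus a client that trusts the VM's WinRM certificate:

# Resolve the VM's WinRM endpoint and open an interactive remote PowerShell session.
$uri = Get-AzureWinRMUri -ServiceName "mytestsvc" -Name "myvm1"
$cred = Get-Credential -UserName "vmadmin" -Message "VM admin credentials"
Enter-PSSession -ConnectionUri $uri.AbsoluteUri -Credential $cred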
Summary

We are really excited about today’s release – we know people have been looking forward to this release for awhile.  We’d like to say a special thanks to everyone who used it during the preview and gave us feedback on it. 

Today’s release now allows everyone to build better cloud solutions than ever before.  These solutions can now integrate IaaS and PaaS together, use both Windows and Linux based software together, and deliver value faster than ever before.  We are really looking forward to the solutions you build with it.

If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

The Windows Azure Team also confirmed an Updated SLA in a 4/16/2013 email message:

When you deploy multiple instances of Virtual Machines, Microsoft provides a financially backed 99.95 percent monthly service level agreement (SLA).


The Windows Azure Team (@WindowsAzure) updated the Infrastructure Service scenario documentation on 4/16/2013:

Scalable Infrastructure in the Cloud

Bring It. We Run It: Scalable, On-demand Infrastructure for Your Apps

Dramatically reduce your wait time to provision IT resources by rolling out apps and infrastructure in minutes. Bring your Windows or Linux-based application to the cloud as-is. Scale up or scale down as needed for a wide range of app hosting scenarios and pay only for what you use.

More in this video.

Extend and Synch: Connect Hybrid Infrastructure Services with a Single Identity

Build hybrid services that take advantage of what you already have while enabling new innovation in the cloud. Bring your existing identities to apps running in Virtual Machines by simply connecting to your on-premises Active Directory. Running Office 365? Simply run Active Directory Federation services in Virtual Machines to sync with on-premises identities for single sign on.

Build. Learn. Test: Rapid Innovation Using Infrastructure Services for Dev & Test

Spin up a test lab within minutes. Connect to your existing infrastructure if required. When you’re done, tear it down, bring your app back in house to run it using your on-premises infrastructure, or keep it in the cloud. The choice is yours.

More in this video.

Customize. Collaborate. Maintain: SharePoint on Windows Azure Infrastructure Services.

Spin up SharePoint farms in minutes without major capital investments. Integrate full trust code to run rich apps and business logic, and provide internet facing collaboration sites on SharePoint that scale with your business needs.

More in this video.

Develop. Scale. Unlock: Robust Infrastructure for SQL Server

Start small, go big. Whether you are building a lab to prototype your newest app with SQL Server or extending data marts into the cloud, Windows Azure Virtual Machines is a solid foundation you can count on. With full SQL Server compatibility you get capabilities like full-text search, or transparent data encryption for greater security.

More in this video.

image_thumb11


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team (@WindowsAzure) posted a 00:04:52 Scaling a SharePoint 2013 farm on Windows Azure Infrastructure Services video on 4/16/2013:

image_thumb75_thumb5Watch this short screencast to learn how to host a SharePoint 2013 farm in Windows Azure. We will walk you through hosting a public internet site and taking advantage of Windows Azure's built-in load balancing capabilities. You will also learn how you can easily expand a web farm using a combination of base images and Windows PowerShell when you need to handle increased demand and traffic.

image_thumb22


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Matt Evans described Using the LightSwitch ServerApplicationContext API in a 4/15/2013 post:

imageThe ServerApplicationContext API is a new feature in LightSwitch, available with Visual Studio 2012 Update 2 or later, which allows you to create entirely new ways to call custom business logic on the LightSwitch Server, using the same rich API you're used to working with on the server tier. We previewed this API in the VS 2012 HTML Client Preview 2 release, but we've made a few tweaks since then, and this article discusses the API in a bit more depth.

Background

By default, the only way a client or service can communicate with the LightSwitch server is via the OData protocol, and only to EntitySets and Queries you've created in the Query designer. See LightSwitch as a Data Source.

We recently introduced a feature in the LightSwitch server which allows developers to create alternative entry points on the LightSwitch middle tier (a.k.a. the server). This is very handy if you want the server to communicate with clients that don't understand OData, or you need to return data that isn't shaped like one of your Entities. For instance, if you want to invoke some custom logic on the server, the solution until now has been the "command table" pattern, where you create an entity which is just a conduit for sending work requests to the server. Another common request we get is for a way to generate reports and interactive dashboards. Reports usually aren't shaped like whole entities, but rather projections of entities and aggregates. To solve this sort of reporting problem, people have had to resort to cumbersome mechanisms like custom RIA services in order to be able to transfer non-entity data out of the server.

Eventually it would be nice to have a great inbox experience for reporting and for service operations. However, in the interim, we have introduced the ability for developers to use the LightSwitch API inside of their own web service endpoints. You can add any of the normal ASP.NET web assets to your Server project and create new ways of interacting with the LightSwitch server. You could create something quick and dirty like an ASP.NET Web Form, or something more powerful like a WCF Service or a Web API endpoint. The key point is that you decide the appropriate way you'd like to expose a new service, using the normal Visual Studio gestures for adding and working with those assets.

Technically, it has always been possible to add aspx pages and Web API calls to the LightSwitch server, but there was no easy way for you to use the LightSwitch API inside your custom entry points, so it wasn't a great experience.

With the ServerApplicationContext API, we've made certain scenarios much easier, and we're opening things up to your imagination.

A quick and dirty example

1. Create a new LightSwitch HTML / C# Application
2. Add a new Table, called "Customer". Give it two properties, "Name" and "BirthDate"
3. Add a new Browse Screen for the Customer Table
4. In Solution Explorer, change to "File View"

image

5. Select the Server project in Solution Explorer
6. Right click on the Server Project, choose "Add" and then choose "New Item"

image

7. Search for "web form"
8. Add a new asp.net web form named "MakeData.aspx"

image

9. Expand MakeData.aspx in the Solution explorer. Double click on the code-behind file (MakeData.aspx.cs)
10. Paste the following code into the Page_Load method:
C# Code:
protected void Page_Load(object sender, EventArgs e)
{
    using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
    {
        Customer c = context.DataWorkspace.ApplicationData.Customers.AddNew();
        c.Name = "Good Guy Greg";
        c.BirthDate = DateTime.Today;
        context.DataWorkspace.ApplicationData.SaveChanges();
    }
}
VB Code – note that you'll want to add "Imports LightSwitchApplication" at the top of your VB code files
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Using Context As ServerApplicationContext = ServerApplicationContext.CreateContext()
        Dim c As Customer = Context.DataWorkspace.ApplicationData.Customers.AddNew()
        c.Name = "Good Guy Greg"
        c.BirthDate = Date.Today
        Context.DataWorkspace.ApplicationData.SaveChanges()
    End Using
End Sub

11. Make another web form called "ShowData.aspx". Add a using statement for "Microsoft.LightSwitch".
12. Put the following code into the Page_Load method

C# Code:

protected void Page_Load(object sender, EventArgs e)
{
    using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
    {
        foreach (Customer c in context.DataWorkspace.ApplicationData.Customers)
        {
            Response.Write(c.Id + " " + c.Name + " " + c.BirthDate + "<br>\r\n");
        }
    }
}

VB Code:

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Using Context As ServerApplicationContext = ServerApplicationContext.CreateContext()
        For Each c As Customer In Context.DataWorkspace.ApplicationData.Customers
            Response.Write(c.Id.ToString() + " " + c.Name + " " + c.BirthDate + "<br>" + vbCrLf)
        Next
    End Using
End Sub

13. F5 your application
14. Note the URI of the app during F5; it should be something like http://localhost:12345/htmlclient/
15. Open a new browser tab, and manually type in the following URI: http://localhost:12345/MakeData.aspx (fix up the port number, as needed).
16. Open a new browser tab, and manually type in the following URI: http://localhost:12345/ShowData.aspx
17. Go back to the original tab, which shows the HTML client. Refresh this tab. You should see new data in your app.

If all went well, you should have seen your entity data in step 16, formatted as uninteresting text, and again in step 17, inside the HTML client UI.

This example isn't especially interesting, but shows that if you know ASP.NET development, you can now build arbitrary pages and other types of endpoints that read and write to your LightSwitch data, using the LightSwitch API.

API Overview

There are actually two primary ServerApplicationContext classes. One of them is strongly typed for your application, that is, it understands which data sources, entities, and queries are in your specific LightSwitch project. This is what you'll use most of the time. It lets you write code like this:

C# Code:

using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
{
    var v = from c in context.DataWorkspace.ApplicationData.Customers
            where c.Name.Contains("Matt")
            select c;

    foreach (Customer c in v)
    {
        Response.Write(c.Name + "<br>\r\n");
    }
}

VB Code

Using Context As ServerApplicationContext = ServerApplicationContext.CreateContext()

    Dim v = From c In Context.DataWorkspace.ApplicationData.Customers
            Where c.Name.Contains("Matt")
            Select c

    For Each c As Customer In v
        Response.Write(c.Name + "<br>" + vbCrLf)
    Next
End Using

Note that because we are using the strongly typed model, our context has knowledge of the ApplicationData service and its Customers table, and all of the properties on the Customer entity.

Weakly Typed ServerApplicationContext

There is another ServerApplicationContext class which is weakly typed. It has no compile time knowledge of your entities or other project assets. However, it can access these items at runtime, using the Weakly Typed API. (Read more here). The weakly typed ServerApplicationContext is actually created by a call to the ServerApplicationContextFactory:

C# Code

using (IServerApplicationContext icontext = ServerApplicationContextFactory.CreateContext())
{
    var typ = icontext.DataWorkspace.SecurityData.GetAuthenticationType();
    if (typ.HasFlag(AuthenticationType.Windows) || typ.HasFlag(AuthenticationType.Forms))
    {
        i++;
    }
}

VB Code – note you'll need to add "Imports Microsoft.LightSwitch.Server" and "Imports Microsoft.LightSwitch.Security" at the top of your source file.

Using icontext As IServerApplicationContext = ServerApplicationContextFactory.CreateContext()
    Dim typ = icontext.DataWorkspace.SecurityData.GetAuthenticationType()
    If (typ.HasFlag(AuthenticationType.Windows) Or typ.HasFlag(AuthenticationType.Forms)) Then
        i = i + 1
    End If
End Using

Suppose that you are writing some sort of extension module that enables generic reporting or import/export scenarios. You want to create a package that anyone can add to their LightSwitch application which will create new Web API endpoints that allow for exporting data as csv files. Because your module needs to work with any possible LightSwitch application, it has no strongly typed model to work with. However, it is straightforward to write weakly typed code which will enumerate the datasources, entitysets, key properties, and so on, allowing you to create a generic module that can operate in any LightSwitch application. The dynamic URL parsing and routing of Web API makes this an especially interesting scenario, e.g. suppose someone requests the following uri:

http://contoso.com/myLightSwitchApp/CsvExporter/Customers

It would be straightforward to write a generic module which implemented this CSV exporter as a Web API endpoint (CsvExporter), which would infer the EntitySet (Customers) to export based on the incoming URI.
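
To make that concrete, here is a minimal sketch of what such an endpoint could look like. For brevity it uses the strongly typed context against the Customers table from the earlier walkthrough rather than the weakly typed API a truly generic exporter would need, and the controller name, route, and CSV formatting are assumptions rather than anything from the article (you would still need to register a Web API route for it in the Server project):

// Hypothetical Web API controller that returns the Customers table as CSV text.
// Assumes the same LightSwitchApplication project as the walkthrough above.
public class CsvExporterController : System.Web.Http.ApiController
{
    // GET .../api/CsvExporter
    public System.Net.Http.HttpResponseMessage Get()
    {
        var sb = new System.Text.StringBuilder();
        sb.AppendLine("Id,Name,BirthDate");

        using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
        {
            foreach (Customer c in context.DataWorkspace.ApplicationData.Customers)
            {
                sb.AppendLine(string.Format("{0},{1},{2:d}", c.Id, c.Name, c.BirthDate));
            }
        }

        var response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.OK);
        response.Content = new System.Net.Http.StringContent(sb.ToString(), System.Text.Encoding.UTF8, "text/csv");
        return response;
    }
}

A fully generic version would replace the hard-coded Customers loop with the weakly typed API described above, inferring the entity set from the incoming URI.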

As a side note, because the SecurityData data service is available in any LightSwitch application, that dataservice will be available strongly typed even on a weakly typed ServerApplicationContext, as seen in the example above.

Challenge: After you've completely read this article, take another look at the above example where I call ServerApplicationContextFactory.CreateContext. Will the variable i ever get incremented by this code? Why or why not?

Current vs. CreateContext

There are two items of interest on the ServerApplicationContext classes: the Current property and the CreateContext method. The Current property returns the currently in-scope ServerApplicationContext, if one exists. The CreateContext method creates a new ServerApplicationContext for your use.

Unlike DataWorkspaces, which can be "Stacked" so that many are simultaneously in scope, only one ServerApplicationContext can be present for a given logical request on the server. Each incoming request to one of the in-built LightSwitch server endpoints has its own ServerApplicationContext automatically created for the lifetime of the request, and which is used to service all activity for that request. If you were to try to create a second ServerApplicationContext when one was already present, you would get a ContextExistsException.

When you are creating your own entry points into the server, it is typically safe to simply call CreateContext without checking to ensure that Current is null first. This is because the normal LightSwitch server hasn't been called yet; IIS and your code are handling the HTTP request routing and LightSwitch hasn't had the opportunity to initialize anything on your behalf.

On the other hand, if you are inside of normal LightSwitch code on the server, like SaveChanges_Executing or Customer_Inserting, attempting to create a new ServerApplicationContext will always fail, because in these cases, the ServerApplicationContext that the LightSwitch save pipeline has created for its own use will already exist.
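
If you are writing helper code that might be invoked from either situation (from your own entry point, where no context exists yet, or from inside the LightSwitch pipeline, where one already does), one defensive pattern is to check Current first and only dispose the context you created yourself. This is just a sketch of that idea, not something lifted from the product documentation:

// Sketch: reuse the pipeline's context when it exists, otherwise create (and later
// dispose) our own. Assumes server-side code with the LightSwitchApplication namespace in scope.
ServerApplicationContext context = ServerApplicationContext.Current;
bool createdHere = (context == null);
if (createdHere)
{
    context = ServerApplicationContext.CreateContext();
}
try
{
    // work with context.DataWorkspace here
}
finally
{
    if (createdHere)
    {
        context.Dispose();
    }
}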

In almost all cases where you are defining the web entry point yourself (a webform, Web API, etc), to use a ServerApplicationContext you just do this:

C# Code

using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
{
    // my code goes here
}

VB Code:

Using Context As ServerApplicationContext = ServerApplicationContext.CreateContext()
    ' my code goes here
End Using

Authentication

Because the key usage scenarios for ServerApplicationContext involve creating new service endpoints on the LightSwitch server, by default, if your LightSwitch application is set to use authentication, ServerApplicationContext tries to enforce user authentication. Specifically, if the authentication mode in your web.config is Windows or Forms and there isn't a valid authenticated user already on the HttpContext when your code calls CreateContext, the call will throw an exception.

If you know what you are doing and do not want this behavior, you can tell CreateContext to skip the authentication check, by calling it in the following way:

C# Code

using (ServerApplicationContext context = ServerApplicationContext.CreateContext(ServerApplicationContextCreationOptions.SkipAuthentication))

VB Code

Using Context As ServerApplicationContext = ServerApplicationContext.CreateContext(ServerApplicationContextCreationOptions.SkipAuthentication)
' allow in unauthenticated users
' my code goes here
End Using

The SkipAuthentication flag tells CreateContext not to do the authentication check.

Next Steps

There are some more complete end to end examples that use ServerApplicationContext, which you can read about here:

Note that these were written in the HTML Preview 2 timeframe, and so the location of the ServerApplicationContext classes is a little bit different. We've got more blog posts planned with some good reporting examples so stay tuned!


Beth Massi (@bethmassi) uploaded Using LightSwitch ServerApplicationContext and WebAPI to Get User Permissions to the Visual Studio Samples Center on 4/17/2013:

imageIn this sample, see how to return user permissions from the server using Web API. This allows you to retrieve an authenticated user's permissions in the HTML client to control UI elements on screens, or to return permissions to custom clients you build against the LightSwitch server.

Introduction

The LightSwitch philosophy is to provide RAD tools for building business apps fast, but still allow advanced customization where and when you need it. If you're new to LightSwitch, I encourage you to start on the LightSwitch Developer Center first.

This sample demonstrates how to return LightSwitch user permissions from the server using Web API. This allows the HTML client to retrieve an authenticated user's permissions in order to control UI elements (hide/unhide, enable/disable) on HTML screens. It can also be used to return permissions to custom clients you build against the LightSwitch server.
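
The walkthrough below has the full details. As a rough sketch only (the controller name is hypothetical, the permission names are assumed to match the companion tip article further down, and the Web API route still has to be registered), the idea boils down to opening a ServerApplicationContext and asking the User object about each permission:

// Hypothetical Web API controller returning the names of permissions the current user holds.
public class UserPermissionsController : System.Web.Http.ApiController
{
    // GET .../api/UserPermissions
    public System.Collections.Generic.List<string> Get()
    {
        var granted = new System.Collections.Generic.List<string>();
        using (ServerApplicationContext context = ServerApplicationContext.CreateContext())
        {
            // Assumes the context exposes Application.User as server code does;
            // Permissions.CanAddCustomer is whatever you defined on the Access Control tab.
            if (context.Application.User.HasPermission(Permissions.CanAddCustomer))
            {
                granted.Add(Permissions.CanAddCustomer);
            }
        }
        return granted;
    }
}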

Walkthrough

For a detailed walkthrough please read:

Using LightSwitch ServerApplicationContext and WebAPI to Get User Permissions

Building the Sample

You will need Visual Studio 2012 Update 2 or later to run this sample. Extract the contents of the .ZIP, open the SLN in Visual Studio and press F5 to run the sample.

More Information

For more information please see:

And please ask questions in the LightSwitch forum and follow @VSLightSwitch on twitter.


Beth Massi (@bethmassi) posted LightSwitch Tip: A Simple Way to Check User Permissions from the HTML Client on 4/12/2013:

imageThose of you that have been working with LightSwitch know that we support a robust permissions system that allows developers to define certain permissions and then check them in code. LightSwitch provides numerous “CanExecute” hooks on entities and queries that can be used for checking permissions around data & query actions.

image_thumb6For instance, if you have defined a permission “CanAddCustomer” you can check if a user has this permission before allowing Inserts on the Customer entity on the server. First define the permissions on the Access Control tab of the project properties:

image

Then in the data designer, select the Server perspective and then drop down the “Write Code” button and select the Customers_CanInsert access control method:

image

Then you write code like this to allow or disallow the insertion of customers:

Private Sub Customers_CanInsert(ByRef result As Boolean)
    result = Me.Application.User.HasPermission(Permissions.CanAddCustomer)
End Sub

You always want to secure the server-side this way in order to protect the data in your system. However, sometimes we also want to use a permission check in the UI in order to hide/unhide (or enable/disable) elements on a screen.

In the Silverlight desktop client this is a very easy thing to do because we make use of portable assemblies that allow LightSwitch to share code between the client and the server side. You have a User object available to you at all times from any screen. In the HTML client this isn’t the case but all is not lost!

Define a Query

If we want to check permissions on the HTML client screens, the easiest thing to do is add a query and secure the query on the server-side. For example, add a query based on Customer called CanAddCustomer:

image

Then add the code in the CanAddCustomer_CanExecute method to check the permission:

Private Sub CanAddCustomer_CanExecute(ByRef result As Boolean)
    result = Me.Application.User.HasPermission(Permissions.CanAddCustomer)
End Sub

Because this will hit the database if a user does have permission, we can make the query as efficient as possible by not returning any actual results. Select the CanAddCustomer_PreprocessQuery method and write a query that won’t return results.

Private Sub CanAddCustomer_PreprocessQuery(
            ByRef query As System.Linq.IQueryable(Of LightSwitchApplication.Customer))

    query = From c In query Where 0 = 1

End Sub
Set Up the Screen

Now that we have our query we can add it to the screen in which we want to enable/disable UI elements based on this permission. On the screen designer click the “Add Data Item” button at the top and add the query to your screen:

image

Then select the control you want to enable/disable and note its name in the properties window; we’ll need this in code.

image

Add Some JavaScript Code

Lastly, select the Screen node in the designer and then drop down the “Write Code” button and add code to the “created” method.

image

myapp.BrowseCustomers.created = function (screen) {
    // Write code here.
    screen.getCanAddCustomer().then(function success() {
        screen.findContentItem("AddCustomer").isEnabled = true;
    }, function error() {
        screen.findContentItem("AddCustomer").isEnabled = false;
    });

};

The code calls the query on our screen and it will fail if the user doesn’t have permission to execute it, which will invoke the failure handler. Note that this could also disable the element if the query failed for another reason, but this ensures the element is only enabled if the client can actually verify the user’s permissions.

Remember that hiding the elements in the client doesn't provide real security, so make sure to use the server-side access control methods shown above to ensure no client can access data you want to protect. 

On a personal note, Beth announced on Facebook on 4/7/2013 that she Got Engaged to Nick Hansen (scroll down). Congrats, Beth and Nick!

image_thumbNo significant Entity Framework articles today

 


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Dina Bass (@dinabass) reported Microsoft Pledges to Match Amazon Prices on Cloud Computing in a 4/16/2013 post to the Bloomberg BusinessWeek blog:

imageMicrosoft Corp. (MSFT), which is releasing new cloud-computing services today, said it will match Amazon.com Inc. (AMZN)’s lowest prices for competing products as it seeks to win business in a growing market.

imageMicrosoft is rolling out Windows Azure Infrastructure Services, which will let customers run existing applications on servers and storage machines in Microsoft data centers, after more than nine months of testing. The company also committed to match market leader Amazon on prices for computing and storage services, even if Amazon cuts rates, said Steven Martin, general manager of Windows Azure Business Strategy.

imageSeventy-one percent of respondents in a Forrester Research Inc. (FORR) survey said they used Amazon Web Services for cloud computing, compared with about 10 percent each for Microsoft, Google Inc. (GOOG) and other vendors. Revenue from public cloud services, which let customers run applications in an outside vendor’s data center and access them via the Internet, jumped 20 percent last year to $109 billion, Gartner Inc. estimates.

Redmond, Washington-based Microsoft, the world’s largest software maker, has more than 200,000 customers for its Azure cloud products, and 1,000 new ones sign up daily, Martin said in an interview. …

Read more.


• Michael Washam (@MWashamMS) described Windows Azure PowerShell Updates for IaaS GA in a 4/16/2013 post:

imageWith the release of Windows Azure Virtual Machines and Virtual Networks into general availability the Windows Azure PowerShell team has been working feverishly to provide an even more powerful automation experience for deploying virtual machines in the cloud.

Remote PowerShell on Windows Azure – Automating Virtual Machines

One of the key requests we have heard from customers is to go beyond the current capabilities of automated infrastructure provisioning and allow the user to bootstrap a virtual machine as part of a fully automated deployment.

With this release we are announcing that Remote PowerShell will be enabled by default on Windows based virtual machines created with the latest version of the Windows Azure PowerShell Cmdlets.

Enabling Remote PowerShell allows a user to create a virtual machine and on boot immediately launch a script to bootstrap whatever configuration is desired. This could be installing and configuring Windows Roles and Features all the way to downloading and deploying an application or website. Authentication is over SSL for security and you can use your own certificate or we can even generate one for you. In addition to the bootstrapping abilities Remote PowerShell allows you to write powerful scripts for remote management and automation that can be run at any time after the virtual machine is booted. The same scripts you use to manage your on-premises servers will work with your servers in Windows Azure. Of course, we do provide a switch to disable this functionality on boot if Remote PowerShell is not desired.

Installing Windows Server Features Automatically

In the example below the new -WaitForBoot parameter is used with New-AzureVM. This switch tells the cmdlet to wait for the virtual machine to be in the RoleReady (booted) state before continuing execution. Once the virtual machine is ready the script calls the Get-AzureWinRMUri cmdlet to retrieve the connection string to execute a remote script against the virtual machine. The script block passed to Invoke-Command installs the Web-Server (IIS) role and the related management tools.

A PowerShell scripter could easily extend this script to automatically deploy a custom web application or service with just a few additional lines of code.
Installing Windows Features using Remote PowerShell

# Using this script installs the generated cert into your local cert store which allows 
# PowerShell to verify it is communicating with the correct endpoint. 
# This REQUIRES PowerShell run Elevated
. "C:\Scripts\WAIaaSPS\RemotePS\InstallWinRMCert.ps1" 

$user = ""
$pwd = ""
$svcName = ""
$VMName = "webfe1" 
$location = "West US"
$image = ""        # name of a Windows Server OS image (e.g., from Get-AzureVMImage), used below

$credential = Get-Credential 

New-AzureVMConfig -Name $VMName -InstanceSize "Small" -ImageName $image |
                Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $pwd |
                Add-AzureEndpoint -Name "http" -Protocol tcp -LocalPort 80 -PublicPort 80 |
                New-AzureVM -ServiceName $svcName -Location $location -WaitForBoot 

# Get the RemotePS/WinRM Uri to connect to
$uri = Get-AzureWinRMUri -ServiceName $svcName -Name $VMName 

# Using generated certs – use helper function to download and install generated cert.
InstallWinRMCert $svcName $VMName 

# Use native PowerShell Cmdlet to execute a script block on the remote virtual machine
Invoke-Command -ConnectionUri $uri.ToString() -Credential $credential -ScriptBlock {
    $logLabel = $((get-date).ToString("yyyyMMddHHmmss"))
    $logPath = "$env:TEMP\init-webservervm_webserver_install_log_$logLabel.txt"
    Import-Module -Name ServerManager
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools -LogPath $logPath
} 
Contents of InstallWinRMCert.ps1
function InstallWinRMCert($serviceName, $vmname)
{
    $winRMCert = (Get-AzureVM -ServiceName $serviceName -Name $vmname | select -ExpandProperty vm).DefaultWinRMCertificateThumbprint

    $AzureX509cert = Get-AzureCertificate -ServiceName $serviceName -Thumbprint $winRMCert -ThumbprintAlgorithm sha1

    $certTempFile = [IO.Path]::GetTempFileName()
    Write-Host $certTempFile
    $AzureX509cert.Data | Out-File $certTempFile

    # Target The Cert That Needs To Be Imported
    $CertToImport = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $certTempFile

    $store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root", "LocalMachine"
    $store.Certificates.Count
    $store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
    $store.Add($CertToImport)
    $store.Close()

    Remove-Item $certTempFile
}
Image and Disk Mobility

Windows Azure is an open computing platform and allows for the movement of your virtual machine disks between on-premises and the cloud. There are two optimized cmdlets that enable you to either upload your VHD or download it.

Uploading a VHD

The first example shows how to upload a VHD to Windows Azure. This can be a bootable OS disk or simply a data disk (remove -OS Windows for data disks). After Add-AzureDisk is called you could use the New-AzureVMConfig cmdlet or the management portal to provision a virtual machine that boots off of the uploaded VHD.

$source = "C:\vmstorage\myosdisk.vhd"
$destination = "https://<yourstorage>.blob.core.windows.net/vhds/myosdisk.vhd"

Add-AzureVhd -LocalFilePath $source -Destination $destination -NumberOfUploaderThreads 5
Add-AzureDisk -DiskName 'myosdisk' -MediaLocation $destination -Label 'mydatadisk' -OS Windows 
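
As a hedged follow-on (service and VM names below are placeholders), once the disk has been registered with Add-AzureDisk you can boot a new virtual machine directly from it by passing -DiskName instead of -ImageName; no Add-AzureProvisioningConfig is needed because the disk already contains a configured OS:

# Sketch: provision a VM that boots from the OS disk registered above
New-AzureVMConfig -Name "migratedvm1" -InstanceSize "Small" -DiskName 'myosdisk' |
    New-AzureVM -ServiceName "<yourservice>" -Location "West US"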
Downloading a VHD

Not only can you upload a disk to Windows Azure, it is also easy to download a VHD! The example below shows how you can save a VHD to the local file system ready to run on a Hyper-V enabled system. (Note: a virtual machine should not write to the VHD at the same time you are trying to download it.)

$source = "https://<yourstorage>.blob.core.windows.net/vhds/myosdisk.vhd"
$destination = "C:\vmstorage\myosdisk.vhd"
Save-AzureVhd -Source $source -LocalFilePath $destination -NumberOfThreads 5 
VMDK Conversion and Migration to Windows Azure

If you have VMWare based virtual machines that you would like to migrate you can use the Microsoft Virtual Machine Converter Solution Accelerator to convert the disks to VHDs and then use the Add-AzureVHD cmdlet to upload the VHD and create a virtual machine in Windows Azure from it.

Copying a VHD across Windows Azure Regions
# Source VHD (West US)
$srcUri = "http://<yourweststorage>.blob.core.windows.net/vhds/myosdisk.vhd"      

# Target Storage Account (East US)
$storageAccount = "<youreaststorage>"
$storageKey = "<youreaststoragekey>"

$destContext = New-AzureStorageContext -StorageAccountName $storageAccount `
                                       -StorageAccountKey $storageKey  

# Container Name
$containerName = "vhds"

New-AzureStorageContainer -Name $containerName -Context $destContext

$blob = Start-AzureStorageBlobCopy -srcUri $srcUri `
                                   -DestContainer $containerName `
                                   -DestBlob "testcopy1.vhd" `
                                   -DestContext $destContext   
                                    
$blob | Get-AzureStorageBlobCopyState 
Enhanced Security -AdminUserName is required for Windows (Breaking Change)

In order to protect you from connections attempting dictionary attacks against your password, we have made it mandatory to supply a username.
This change affects the New-AzureQuickVM and the Add-AzureProvisioningConfig cmdlets used for VM creation. Each has a new -AdminUserName parameter that is now required.
Make sure you can remember it, but do not use obvious names like Administrator or Admin.
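
For example, the quick-create path now looks like this (values are placeholders, following the variables used in the earlier script):

# Sketch: quick-create a Windows VM with an explicit, non-obvious admin user name
New-AzureQuickVM -Windows -ServiceName "mysvc01" -Name "vm01" `
    -ImageName $image -AdminUsername "opsadmin42" -Password $pwd `
    -Location "West US" -InstanceSize "Small"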

High Memory Virtual Machine Support

The latest version of the WA PowerShell Cmdlets now supports the new higher memory SKU sizes of A6 and A7 for larger workloads. For more information about Windows Azure compute sizes see the following: http://www.windowsazure.com/en-us/pricing/details/virtual-machines/.

high-mem-skus
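
To target one of the new sizes you simply pass it as the InstanceSize; for example, a sketch using the placeholder variables from the earlier script:

# Sketch: provision a high-memory A6 (4 core / 28 GB) instance
New-AzureVMConfig -Name "bigmem1" -InstanceSize "A6" -ImageName $image |
    Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $pwd |
    New-AzureVM -ServiceName $svcName -Location "West US"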

Managing Availability Sets on Deployed VMs

We have also added the ability to specify availability set configuration for groups of virtual machines for highly available configurations. Previously, this could only be set at deployment time or post deployment from the Windows Azure Management Portal. For more information on availability sets see the following article:
http://www.windowsazure.com/en-us/manage/windows/common-tasks/manage-vm-availability/

Get-AzureVM -ServiceName "mywebsite" | Where {$_.Name -like "*web*"} | 
    Set-AzureAvailabilitySet -AvailabilitySetName "wfe-av-set" |
    Update-AzureVM
Wrapping Up

I hope you are excited about the new features in the Windows Azure PowerShell Cmdlets. If you would like to try this yourself, you will need a subscription, the WA PowerShell Cmdlets download, and a short read on getting started.


• Lori MacVittie (@lmacvittie) prefaced her The Devops Fallacy post of 4/16/2013 to F5’s DevCentral blog with “On that which is seen and that which is not seen ...”:

imageWhen it comes to talking IT operations and financial considerations I tend to stay away from deep economic theories. I'm not Joe Weinman, after all.

But I happened upon (no, I don't recall how so don't even ask. The Internets, you see) an 1850s essay on political economics written by Frédéric Bastiat which used an analogy as the basis to explain his theory. Analogies are great because they're like pictures for grown ups and sometimes, pictures are necessary.

In any case, this particular essay is often referred to as the Glazier's Fallacy (also known as the Parable of the Broken Window) and the story focuses on a broken window, 6 francs, and a whole lot of economic theory. What captured my attention was a nugget in the parable that applies fairly directly to operations and in particular the business value of devops.

Bastiat argues that, despite the silver-lining thought that says "oh, well, a broken window is bad but at least the glazier can stay in business", the broken window is actually bad because it prevents money from being spent elsewhere (spending that would ultimately encourage more economic opportunity).

Let us take a view of industry in general, as affected by this circumstance. The window being broken, the glazier's trade is encouraged to the amount of six francs: this is that which is seen.

If the window had not been broken, the shoemaker's trade (or some other) would have been encouraged to the amount of six francs: this is that which is not seen.

And if that which is not seen is taken into consideration, because it is a negative fact, as well as that which is seen, because it is a positive fact, it will be understood that neither industry in general, nor the sum total of national labour, is affected, whether windows are broken or not.

Now let us consider James B. [the shopkeeper whose window has been broken] himself. In the former supposition, that of the window being broken, he spends six francs, and has neither more nor less than he had before, the enjoyment of a window.

In the second, where we suppose the window not to have been broken, he would have spent six francs on shoes, and would have had at the same time the enjoyment of a pair of shoes and of a window.

Now, as James B. forms a part of society, we must come to the conclusion, that, taking it altogether, and making an estimate of its enjoyments and its labours, it has lost the value of the broken window.

Ignoring the politics, if we apply this same parable to operations and a misconfigured server (as opposed to a broken window) we start to see the value of not having to spend time fixing things that are broken. "Now, as James B forms a part of operations, we must come to the conclusion, that, taking it altogether, and making an estimate of its value and its labor, operations has lost the value of the misconfigured server."

In other words, the economic case for devops is based partly upon the reality that time spent fixing things is lost. It's a negative; it's not just that we gain the time when devops is applied and deployment lifecycles are made successfully repeatable. It's that we also gain what we had lost spending time tracking down errors and fixing them. "Enjoyment of the shoes and the window" in operations equates to "enjoyment of new value and a properly working server."

In other words, it's nearly a double gain for operations because that time that was spent fixing things is now spent on adding value and is not lost in troubleshooting. The value of devops is computed not just by the value it can add, but by the continued value of the server working as expected.

That which is seen (the server) and that which is not seen (the new value that could be added were operations free to innovate).

We generally articulate the value of devops by saying "we'll have more time to be more responsive or innovate new services" but we forget to add the value of that server that continues to work as promised while we're innovating. That value remains and it actually is a positive gain because we aren't expensing time against it.

When we're trying to articulate the value of devops to the organization, we need to include both the sustained value of properly working systems as well as the new value added. Focusing on the positive impact and value to the business in terms of dollars and time (not always the same, as Bastiat theorizes) may help sway those still unconvinced of the value of devops.

And for those focusing (or starting to focus) on SDN, there's a similar argument regarding the positive gain of a more self-managing network in addition to new value added. Et tu, cloud. The general principle applies to all technology that enables systems to run smoothly on their own.

Food for thought if you're trying to justify getting a technology initiative like SDN, devops, or cloud funded and running into roadblocks.

Lori says in her Twitter bio that she’s now a “Mom ★ Grandmom.” Hard to believe!


Bill Hilf (@bill_hilf): Announcing Infrastructure Services GA and New Price Commitment [emphasis added] summarizes his The Power of ‘And’ post to the Windows Azure blog of 4/16/2013:

imageToday is an exciting day for Microsoft, Windows Azure and all of our customers around the world.  I am very pleased to announce the general availability of Windows Azure Infrastructure Services. This new service now makes it possible for customers to move applications into the cloud.

imageOur announcement today is a significant step in our cloud computing strategy, which has been influenced directly by our discussions with customers and partners around the world.  Throughout these conversations, one thing holds true in every discussion - enterprises know that success with the cloud lies in the power of “and.”  Customers don’t want to rip and replace their current infrastructure to benefit from the cloud; they want the strengths of their on-premises investments and the flexibility of the cloud. It’s not only about Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), it’s about Infrastructure Services and Platform Services and hybrid scenarios.  The cloud should be an enabler for innovation, and an extension of your organization’s IT fabric, not just a fancier way to describe cheap infrastructure and application hosting.

Customers have also told me that they don’t want to have to choose either a low price or good performance; they want a low price and good performance. That’s why today we are also announcing a commitment to match Amazon Web Services prices for commodity services such as compute, storage and bandwidth.  This starts with reducing our GA prices on Virtual Machines and Cloud Services by 21-33%.  Regardless of how you choose to buy Windows Azure, you’ll get the benefit of this price reduction. As our operations GM Steven Martin said, “If you had concerns that Windows Azure was more expensive, we’re putting those concerns to rest today.”

imageBy listening to customer feedback, we learned a lot about the workloads you want to run. As part of our new Infrastructure Services release, we’ve added in new high memory VM instances (28GB/4 core and 56 GB/8 core) to run your most demanding workloads.  We also learned more about the apps you want to run so we’ve added in a number of new Microsoft validated instances to our list including SQL Server, SharePoint, BizTalk Server, and Dynamics NAV to name a few.

It’s gratifying to see our customers already using our unique hybrid solution to innovate. For example, automotive marketing and social media firm Digital Air Strike have utilized Windows Azure’s Infrastructure Services and Platform Services to create an instant feedback mechanism for all car purchases and service transactions for automotive giant General Motors. This enables GM to monitor the health of their customer relationships in near real time, providing deep and valuable business insights.

Digital Air Strike’s CMO recently told us that they’d looked at Amazon Web Services and other cloud providers, but concluded that “when you work for the enterprise, you have to choose Microsoft.” We’re honored by that statement, and it deepens our resolve to continue to lead the enterprise, not just with our world class cloud platform, but with decades of experience and unparalleled support – enterprise is in our DNA.  We’ll never tell you that a Microsoft app inside your virtual machine is “up the stack” and we don’t support it—we’ll support it and make sure you’re successful. And we’ll back it with monthly SLAs that are among the industry’s highest.

Another customer I recently spoke to is Telenor, a Norwegian telecommunications company who needed to upgrade to the latest SharePoint solution across 13 business units and 12 countries.  Traditional approaches would have exceeded their timeframe and budget, so they turned to Windows Azure, spun up their SharePoint 2013 farms and reduced their setup time from 3 months to two weeks, saving not only time, but money with a 70% cost reduction on their test environment.  For production, they will leverage the VM portability available between Windows Azure and Windows Server to move their final production deployment to their existing 3rd party hosting provider. Incredible time to market and no vendor lock-in.

Telenor’s great story is just one example on a growing list of more than 200,000 Windows Azure customers. More and more, we’re hearing from customers that our hybrid cloud approach delivers flexibility to develop and deploy apps the way businesses need. We want to ensure customers are set up for the future while working with what they have today, leveraging existing skills and infrastructure on premises and in Windows Azure.

We recognize customers have a choice, and that’s why today, we are making it as easy as possible to have it all – a complete hybrid cloud platform, great support, without a price barrier. Go to WindowsAzure.com today for a free trial, and experience the power of “and”. Also visit Scott Guthrie’s blog for a deep dive into Infrastructure Services.


The SQL Server Team (@SQLServer) suggested that you Develop and Test New SQL Server Apps, Scale Existing Apps and Unlock Hybrid Scenarios with Windows Azure Infrastructure Services in a 4/16/2013 post:

imageToday Microsoft announced the general availability of Windows Azure Infrastructure Services, which includes Virtual Machines and Virtual Networks to keep your Windows Azure environment connected to your on-premises infrastructure and applications.

imageWindows Azure Infrastructure Services provide the robust cloud infrastructure that is needed to run SQL Server and many of the applications that rely on SQL Server. With Windows Azure Virtual Machines, Virtual Networks, Data Sync services and full SQL Server compatibility you can now enable the following scenarios:

  • Develop & Test SQL Server Applications in Windows Azure

With full SQL Server compatibility you can develop your SQL Server application quickly, while reducing costs of provisioning additional hardware. In addition you can choose to deploy your new application in Windows Azure or back on-premises with minimal effort.

  • Move your Existing On-Premises SQL Server Applications

imageVirtual Machines offer many compute and memory configurations to choose from, so you can find the one that fits your existing on-premises SQL Server application requirements. For example, your existing departmental SQL Server Line of Business applications that have already been virtualized are good candidates to move to Windows Azure. Once you have selected the appropriate configuration, Windows Azure makes moving your on-premises application easy with Azure tools, or if you have the latest version of System Center, you can use that as well to upload to Azure. Again, with full SQL Server compatibility you can utilize features like Transparent Data Encryption for database security, Full Text Search and AlwaysOn for high availability of your databases running in Virtual Machines.

  • Backup & Restore on-premises SQL Server Databases

Using a combination of Windows Azure Storage and Virtual Machines, you can create a cost effective way to backup and restore your on-premises SQL Server databases. With the recent SQL Server 2012 SP1 CU2 update, you can now backup and restore directly to a Windows Azure Storage URL, making it a one step process to backup and restore to Azure.

  • Unlock Hybrid SQL Server Scenarios

Windows Azure Virtual Machines and Virtual Networks unlock new hybrid application scenarios where an instance of SQL Server running on-premises is connected to a SQL Server instance running in Windows Azure Virtual Machines for additional on-demand scale and broader global reach with worldwide Azure datacenters. In addition, Windows Azure can uniquely provide you the type of support that is needed for hybrid applications by fully supporting both your on-premises SQL Server instances and SQL Server instances running in Windows Azure Virtual Machines. This significantly simplifies troubleshooting any hybrid scenarios where it may not be clear initially if the issue is with the on-premises instance or the cloud instance.

  • Create Multi-Tiered Cloud Applications

You can create multi-tiered cloud applications in Windows Azure where the core database tier utilizes SQL Server running in a Virtual Machine, and the application tier uses Windows Azure SQL Database (formerly known as SQL Azure) as a temporary database for the application tier to take advantage of its unique, dynamic scale-out capabilities.

SQL Server is the ideal data platform for your Hybrid IT environment as it enables end-to-end data platform scenarios that span on-premises and cloud. The scenarios above represent just the beginning of what you can do with SQL Server on-premises and Windows Azure Infrastructure Services in the cloud. Go to the Infrastructure Services page on WindowsAzure.com to try these SQL Server scenarios today.

In coming weeks, you will find more in-depth blogs covering each of the above SQL Server scenarios as well as best practices when it comes to optimizing security, high availability and performance of your SQL Server applications running in Windows Azure Virtual Machines. You can also reference best practice documentation on MSDN for implementing the SQL Server scenarios in Windows Azure Virtual Machines. We hope you are as excited as we are about this release!


NBC News asserted “RightScale With Windows Azure Provides Users Comprehensive Automation and Control” in a preface to a RightScale Supports Windows Azure Infrastructure Services General Availability press release of 4/16/2013:

imageRightScale® Inc., a leader in cloud management, today announced enterprise support for Windows Azure in conjunction with the general availability of Windows Azure Infrastructure Services. Through the RightScale "Get Your App to Azure" program, RightScale empowers Windows Azure customers to accelerate application development in the cloud with the innovative RightScale multi-cloud management platform.

image"RightScale has the experience and the technology to get companies up and running quickly on Windows Azure Infrastructure Services," said Michael Crandell, CEO of RightScale. "Enterprises want IaaS choice. RightScale enables our customers to provision, configure, and automate individual servers or entire deployments on Windows Azure in minutes."

"Microsoft is committed to providing a full range of cloud services on Windows Azure, and RightScale's cloud management solution for Windows Azure is an example of how solutions can give customers additional value," said David Aiken, group product manager, Server and Tools Marketing of Microsoft. "Customers can take advantage of Windows Azure's openness and flexibility while using RightScale for faster onboarding and full-feature automation for their cloud deployments."

imageBenefits and features of RightScale and Windows Azure include:

  • Faster on-ramp: The combination of Windows Azure and RightScale provides IT professionals and developers a faster on-ramp to the Windows Azure cloud. Customers use customizable pre-built RightScale ServerTemplates™ for dynamic configuration, including an out-of-the-box scalable 3-tier .NET deployment. In addition, customers have access to the RightScale MultiCloud Marketplace, which includes pre-built cloud ServerTemplates, scripts, and architectures published by RightScale, ISV, and SI partners.
  • Automated cloud management: Customers use the RightScale configuration framework to facilitate efficient, automated provisioning and operations on Windows Azure. RightScale supports both Windows and Linux on Windows Azure and provides auto-scaling based on application-specific custom metrics such as number of SQL Server queries.
  • Global deployments: RightScale provides global management of the six geographic regions provided by Windows Azure, enabling customers to move workloads among regions and architect for high availability and disaster recovery across regions.
  • Multi-cloud management: RightScale delivers a multi-cloud management view for customers to manage their Windows Azure public cloud deployments alongside other public or private cloud deployments -- using the same configuration and automation methodology for both cloud architectures.

To find out more about RightScale, the RightScale "Get Your App to Azure" program and Windows Azure, please visit www.rightscale.com/azure.


Walter Myers III posted Walkthrough to Configure System Center Management Pack for Windows Azure Fabric Preview for SCOM 2012 SP1 on 4/13/2013. The following excerpt skips about 20 feet of configuration steps and begins with monitoring activities:

… Let’s now return to the Monitoring view. Navigate to the Cloud Service State node under the Monitored Azure Resources folder. You will see here the cloud services you have selected for monitoring. Note that we came here instead of the Discovered Cloud Services node because this node is in the folder that only presents the monitored Azure resources we previously selected. The Discovered Cloud Services will simply display everything that has been discovered.

image

If we head back to the Discovered Cloud Services node, we will see all of the discovered cloud services. Note that just as above, I have three cloud services that are now being monitored, but two have a critical state. Let’s investigate the cause of the critical state.

image

Right-click on one of the critical state items, select Open in the popup menu, and then select Health Explorer for <cloud_service_name> in the secondary popup menu.

image

You will now see the Health Explorer, which can provide specific details regarding where you may have problems with a given object you are monitoring. By default, the view is scoped to unhealthy child monitors (note the scoping in the yellow section immediately above the Health Explorer nodes), and we can see at the bottom of the explorer that it has found my expired certificate which caused it to go into a critical state soon after the object was first monitored.

image

If  I go ahead and delete the scope to unhealthy child monitors by selecting the “x” next to “Scope is only unhealthy child monitors,” we find even more information on the cloud service, as seen below. Let’s go ahead and close the Health Explorer and go back to the Operations Console (this is a separate window, so closing it won’t close the Operations Console).

image

Another way to find out the problem is to check the alerts.  Let’s navigate to the Active Alerts node under the Windows Azure folder, as seen below. Notice that the alert tells us the service certificate for this hosted (cloud) service will expire soon. This is a great new feature of the SCOM 2012 SP1 Azure management pack! All I have to do is update the service certificate and my cloud service state will turn back to green. Impressive!

image

Let’s now navigate to the Storage State node under the Monitored Azure Resources folder. Note that we have a healthy storage account.

image

Finally, let’s navigate to the Virtual Machine State node under the Monitored Azure Resources folder. Note that we are monitoring healthy virtual machines. One thing to note here is that the management pack does not look at any resources running on the virtual machines such as SQL Server or IIS. This is purely for the overall health of the server itself, and not any installed features or server programs running on the server. If you want that functionality, you will have to add these servers to a Virtual Network so that you have IP-level connectivity and can monitor them just as you would an on-premises server. That would also mean adding the appropriate management packs for the programs you would like to monitor, such as SQL Server 2008/2012 or IIS.

image

Let’s now go to the Health Explorer for one of the virtual machines, and clear the scope for only unhealthy child monitors. When we expand the Performance node, notice that there are two performance counter monitors (not rules) out of the box! My expert SCOM colleague Ian Smith cracked open the management pack and found that it is making calls to the fabric controller, which only monitors a limited number of performance counters for virtual machines. Thus we have only these two performance monitors currently but will investigate to see if there are more. On the right-hand side you can see that available memory is healthy for now, but the monitor will discover if it goes into a critical state and change the operational state accordingly.

image

If you want to see the threshold for the monitors, then right-click the monitor and select the Monitor Properties… item from the popup menu, as seen below.

image

You will now see the dialog for the monitor. Select the Configuration tab to view the threshold value, as seen below. Like any other monitor, you can select the Overrides tab and override the values to those you would like. Coolness.

image

That ends our walkthrough for now, but we will be looking into finding more information on the new management pack. Stay tuned for much more to come on SCOM 2012 SP1 and Azure. We’re just getting started!



<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image_thumb75_thumb7No significant articles today


<Return to section navigation list>

Cloud Security, Compliance and Governance

David Linthicum (@DavidLinthicum) asserted “With cloud bringing even more complexity to enterprises, there needs to be a plan for centralized control of these resources” in a deck for his Don't wait till it's too late to prepare your cloud governance article of 4/16/2013 for InfoWorld’s Cloud Computing blog:

imageLast September, I noted the tipping point for cloud management is nigh to make the case that, as we migrate into cloud-based systems, control and governance of those systems should be a priority: "At some point, companies have to get serious about how they'll manage these cloud services, including monitoring use, uptime, security, governance, and compliance with SLAs."

image_thumb2As cloud deployment continues, the need for a centralized approach to control these new resources, along with the existing resources, becomes a more pressing matter for a few key reasons:

  • Many cloud deployments are carried out in support of devops. This typically means the use of many different public and private cloud services, as well as traditional development technology. Thus, there needs to be centralized control of these resources in the move from design to development, then to test, and finally to deployment.
  • The complexity brought by cloud computing requires that these sets of resources need to be abstracted in order to be used productively. This means placing governance or management systems between the people and systems consuming the resources or services and the hundreds -- perhaps thousands -- of services that will appear as enterprises move to cloud computing.

imageIf you're moving to cloud, what should you be thinking about right now? What's obvious is that we need to get ahead of the complexity. Create a strategy and a plan to manage an environment where the number of resources from public and private clouds will expand dramatically over the next several years. Look at existing and emerging technology to help, such as those that provide governance and resource management.

The core problem is that most in enterprise IT don't think this way. The rising complexity will push them in the direction of more governance and control, resulting in less productivity and less value. In other words, the typical IT organization will act too late. Don't be one of those.

The better approach is to create a plan right now to deal with the complexity before it emerges and changes. Invest in people, processes, and technology, making sure that, as you onboard cloud computing resources, they actually improve your organization.


<Return to section navigation list>

Cloud Computing Events

The San Francisco Bay Area Azure Developers group will host Glenn Block’s Mobile app development with Azure Mobile Services presentation on 5/1/2013 at 6:30 PM at Microsoft San Francisco (in Westfield Mall where Powell meets Market Street):

  • 835 Market Street
    Golden Gate Rooms - 7th Floor
    San Francisco, CA (map)

    We are in the same building as Westfield Mall; A good place to park (and probably cheapest) is the garage on Mission right across from Westfield

imageEase your mobile app development to the cloud powered by Javascript with Azure Mobile Services.

Javascript: It's not just for browsers anymore. If you are a Javascript developer today you are no longer confined to the browser frame, you can take those skills to mobile devices, servers and the cloud. The technology makes it possible, but it's not easy. You've got a lot of learning to do and a lot of things to worry about like data, identity, validation, push, scale and diagnostics to name a few. Azure Mobile Services is there to make it easy. It provides you a ton of backend services out of the box to help you cloud enable those client apps utilizing the client side Javascript skills you already have. If you are a node developer you can go even further. It doesn't stop there though, it includes client SDKs for IOS, Android, WIndows 8 and a newly announced library for Phonegap/HTML5 which you can use to reach any device anywhere. You have to just see it to believe it, and if you come to this talk you will.

imageBio: Glenn works on the Windows Azure team, making sure it's a kick-ass platform for open source development. When he's not developing products or spending time with family, you'll find him at a conference somewhere in the world, hacking away on some new thing, pairing up with whoever he can find, or tweeting into the wee hours of the night as @gblock.


The San Francisco Bay Area Azure Developers group will host an Azure Cloud Numerics & F#: How I Learned to Stop Worrying and Love Big Data presentation at 6:30 PM on 4/23/2013 in the Microsoft San Francisco office:

We are in the same building as Westfield Mall, 7th Floor

imageThe promise of Big Data is obvious - more data, more information, and hopefully more insights. There is one problem, though: Big Data is Big, and performing analytics on large datasets comes with challenging issues. Where do you start when you can't even open your data on a single machine?

In this talk, we'll take a look at Microsoft Codename "Cloud Numerics". Currently in preview, Cloud Numerics is a collection of tools for modeling and analyzing data at scale. With Cloud Numerics, you can develop and debug on your desktop, and deploy a High-Performance Cluster on Azure when the hammer is needed. Its programming model handles gigantic arrays and matrices (dense and sparse) in a distributed manner, and it comes with a library of numeric methods that comes in handy for a variety of machine learning tasks. We'll demonstrate Cloud Numerics and F# in action, illustrating where we think it fits for the Data Scientist who needs Big Compute.

imageMathias Brandewinder has been writing software in C# and F# for years, and loving every minute of it, except maybe for a few release days. He is an F# MVP, and enjoys arguing about code and how to make it better, and gets very excited when discussing TDD or F#. His other professional interests are applied math and probability. If you want to know more about him, you can check out his blog at www.clear-lines.com/blog or find him on Twitter as @brandewinder.

---

Also, don't forget about MongoDB San Francisco coming up on May 10th. The conference sells out every year, but I believe they still have a few passes left.

https://www.10gen.com/events/mongodb-san-francisco-2013


The Software and Information Industry Association (@SIIASoftware) will hold AATC (All About the Cloud) on May 7-9, 2013 at the Palace Hotel in San Francisco. From the press release:

imageThe eighth annual conference will bring together industry-leading executives, prominent analysts, venture capitalists and members of the media to discuss revolutionary ways software is being developed, utilized, and delivered, as well as emerging trends in the quickly changing landscape of cloud computing. Topics will include the rapid and disruptive evolution of data analytics, leveraging social media to achieve a competitive advantage, new monetization methods for next-generation cloud and mobile solutions, risk management and cybersecurity, and much more. Keynote speakers will include Joe Weinman, author of Cloudonomics; Adrian Cockcroft, Director of Cloud Architecture at Netflix; and Margaret Dawson, VP, Product Marketing & Cloud Evangelist, HP Cloud Services.

AATC will also feature:

  • NextGen: Spotlight on Cloud Innovators. SIIA’s 2013 NextGen companies, ranging from software application companies to technology providers, represent the most innovative companies transforming the software and services industry. Attendees will have an early look at cutting-edge technologies, as well as new strategic partnership and investment prospects.
  • CODiEs: Industry’s Only Peer-Selected Tech Awards. The 2013 SIIA CODiE Award winners will be announced during a special awards presentation at All About the Cloud. The CODiE Awards, now in their 28th year, have recognized more than 1,000 software and information companies for achieving excellence and continue to be the industry's only peer-recognized awards program in the content, education, and software industries. This year’s award program has been enhanced with new categories reflecting dramatic changes in the software and information industries.

For a complete schedule of events, visit: http://www.siia.net/aatc/2013/schedule.asp

About SIIA

SIIA is the leading association representing the software and digital content industries. SIIA represents approximately 700 member companies worldwide that develop software and digital information content. Information technology (IT) and software security are critical issues to SIIA’s members, many of whom strive to develop safe, secure and state-of-the-art products that effectively serve their commercial and government customers alike, while protecting their intellectual property. The SIIA Software Division provides a forum for companies developing the applications, services, infrastructure and tools that are driving the software and services industry forward. For further information, visit www.siia.net/software.

I’m surprised that no one from the Windows Azure team was on the list of speakers as of 4/18/2013.


The Windows Azure Team (@WindowsAzure) announced on 4/16/2013 A Windows Azure Community Event, streaming online from 9:00 AM through 5:00 PM PST on 4/23/2013:

A Windows Azure Community Event

imageWe've received such a great response from the community about Windows AzureConf 2012 that we're proud to let you know the community conference is coming back this Spring.

imageOn April 23, 2013, Microsoft will be hosting Windows AzureConf, a free event for the Windows Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions presented by Windows Azure community members. Streamed live for an online audience on Channel 9, the event will let you see how developers just like you are using Windows Azure to develop robust, scalable applications. Community members from all over the world will join Scott in the Channel 9 studios to present their own inventions and experiences. Whether you’re just learning Windows Azure or you've already achieved success on the platform, you won’t want to miss this special event.

Get Involved

image_thumb75_thumb8Want to get involved locally? There may be a local Windows AzureConf or Global Windows Azure Bootcamp event near you. Click here to see what's happening around the world in the Windows Azure community.

Register for Windows AzureConf

If you'd like to be reminded of the upcoming event and to receive updates via email about Windows AzureConf, please register using the form below. Registration for the event isn't required for you to attend, but we'd love to send you some preliminary information about the event and to keep you informed as to how the event's progressing. Once you register, we'll give you an .ICS file you can download to block some time on your calendar to attend this free, live event online.


Robin Shahan (@RobinDotNet) invites San Francisco Bay Area developers to the Global Windows Azure Bootcamp SF to be held April 27, 2013 at Microsoft’s San Francisco office:

imageWelcome to Global Windows Azure Bootcamp San Francisco!

On April 27th, 2013, you’ll be joining a global event happening in over 50 locations worldwide on the same day!

This one-day deep dive class will get you up to speed on developing for Windows Azure. The class includes a trainer with deep real-world experience with Windows Azure, as well as a series of labs so you can practice what you just learned.

bootcampAwesome. How much does it cost?

This event is FREE to the attendees. Gratis! Gratuite! Libero!

Even more awesome! What’s the catch?

There's no catch. Even the pros recommend this event.

image

How do I attend?

All you have to do is register. Keep in mind you will need to bring your own laptop to do the labs.

What do I need to bring?

You will need to bring your own laptop and have it preloaded with the software listed here. Please do the installation upfront as there will be no time to troubleshoot installations during the day.

We'll be supplying coffee in the morning, and lunch at midday.

Is this for beginners?

Yes and no. The local trainers will use the Windows Azure Training Kit to guide you through the basics. We’ll also be running a massive scalability experiment that may bring one of the Windows Azure datacenters down!

General Admission
$0.00  Register


<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Jeff Barr (@jeffbarr) announced Local Secondary Indexes for Amazon DynamoDB in a 4/18/2013 post:

imageYou can now create local secondary indexes for each of your Amazon DynamoDB tables. These indexes give you the power to query your tables in new ways, and can also increase retrieval efficiency.

What's a Local Secondary Index?
The local secondary index model builds on DynamoDB's existing key model.

Up until today, you had to select one of the following two primary key options when creating a table:

  • Hash - A strongly typed (string, number, or binary) value that uniquely identifies each item in a particular table. DynamoDB allows you to retrieve items by their hash keys.
  • Hash + Range - A pair of strongly typed values that collectively form a unique identifier for each item in a particular table. DynamoDB supports range queries that allow you to retrieve some or all of the items that match the hash portion of the primary key.

image_thumb111With today's release we are extending the Hash + Range option with support for up to five local secondary indexes per table. Like the primary key, the indexes must be defined when you create the table. Each index references a non-primary attribute, and enables efficient retrieval using a combination of the hash key and the specified secondary key.

You can also choose to project some or all of the table's other attributes into a secondary index. DynamoDB will automatically retrieve attribute values from the table or from the index as required. Projecting a particular attribute will improve retrieval speed and lessen the amount of provisioned throughput consumed, but will require additional storage space. Items within a secondary index are stored physically close to each other and in sorted order for fast query performance.
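
Since the index must be declared when the table is created, the table-creation request is also where the index and its projection are defined. Below is a minimal sketch using the AWS SDK for Java low-level API; the OrderHistory table, its CustomerId/OrderDate/OrderTotal attributes, the OrderTotalIndex name, and the throughput values are hypothetical, not taken from the post:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.*;

public class CreateTableWithLsi {
    public static void main(String[] args) {
        // Uses the default credentials chain and region.
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        CreateTableRequest request = new CreateTableRequest()
            .withTableName("OrderHistory")                            // hypothetical table
            .withAttributeDefinitions(
                new AttributeDefinition("CustomerId", ScalarAttributeType.S),
                new AttributeDefinition("OrderDate", ScalarAttributeType.S),
                new AttributeDefinition("OrderTotal", ScalarAttributeType.N))
            .withKeySchema(
                new KeySchemaElement("CustomerId", KeyType.HASH),     // hash key
                new KeySchemaElement("OrderDate", KeyType.RANGE))     // range key
            // Local secondary indexes can only be declared here, at creation time.
            .withLocalSecondaryIndexes(
                new LocalSecondaryIndex()
                    .withIndexName("OrderTotalIndex")
                    .withKeySchema(
                        new KeySchemaElement("CustomerId", KeyType.HASH),
                        new KeySchemaElement("OrderTotal", KeyType.RANGE))
                    // Project only the keys; non-projected attributes are fetched
                    // from the table when a query asks for them.
                    .withProjection(new Projection().withProjectionType(ProjectionType.KEYS_ONLY)))
            .withProvisionedThroughput(new ProvisionedThroughput(10L, 10L));

        client.createTable(request);
    }
}

Projecting only the keys keeps the index small and cheap to write; projecting more attributes trades storage for cheaper reads, as the cost section below explains.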

How Do I Create and Use a Local Secondary Index?
As I noted earlier, you must create your local secondary indexes when you create the DynamoDB table. Here is how you would create them in the AWS Management Console:

DynamoDB's existing Query API now supports the use of local secondary indexes. Your call must specify the table, the name of the index, the attributes you want to be returned, and any query conditions that you want to apply. We have examples in Java, PHP, and .NET / C#.
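
As a rough sketch of what such a call looks like in Java (continuing the hypothetical table and index names above, with made-up values), a query against the index fixes the hash key, applies a range condition on the indexed attribute, and lists the attributes to return:

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.*;

public class QueryByIndex {
    public static void main(String[] args) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        // Fix the hash key and apply a range condition on the index's range attribute.
        Map<String, Condition> keyConditions = new HashMap<String, Condition>();
        keyConditions.put("CustomerId", new Condition()
            .withComparisonOperator(ComparisonOperator.EQ)
            .withAttributeValueList(new AttributeValue().withS("C-1001")));
        keyConditions.put("OrderTotal", new Condition()
            .withComparisonOperator(ComparisonOperator.GT)
            .withAttributeValueList(new AttributeValue().withN("100")));

        QueryRequest query = new QueryRequest()
            .withTableName("OrderHistory")
            .withIndexName("OrderTotalIndex")                 // query the index, not the table
            .withKeyConditions(keyConditions)
            .withAttributesToGet("CustomerId", "OrderDate", "OrderTotal");

        QueryResult result = client.query(query);
        System.out.println(result.getItems());                // items sorted by OrderTotal
    }
}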

Costs and Provisioned Throughput
Let's talk about the implications of local secondary indexes on the DynamoDB cost structure.

Every secondary index means more work for DynamoDB. When you add, delete, or replace items in a table that has local secondary indexes, DynamoDB will use additional write capacity units to update the relevant indexes.

When you query a table that has one or more local secondary indexes, you need to consider two distinct cases:

For queries that use index keys and projected attributes, DynamoDB will read from the index instead of from the table and will compute the number of read capacity units accordingly. This can result in lower costs if there are fewer attributes in the index than in the table.

For index queries that read non-projected attributes, DynamoDB will need to read the table and the index. This will consume additional read capacity units.

Get Started Now
Local secondary indexes are available today in the US East (Northern Virginia), US West (Oregon and Northern California), South America (São Paulo), Europe (Ireland), and Asia Pacific (Singapore, Tokyo, and Sydney) Regions, and you can start using them today.

My requests of the past few years for secondary indexes on Windows Azure tables have fallen on deaf ears. Perpetual implementation promises for this feature have gone unfulfilled.


Werner Vogels (@werner) chimed in with an Expanding the Cloud: Faster, More Flexible Queries with DynamoDB post on 4/18/2013:

imageToday, I’m thrilled to announce that we have expanded the query capabilities of DynamoDB. We call the newest capability Local Secondary Indexes (LSI). While DynamoDB already allows you to perform low-latency queries based on your table’s primary key, even at tremendous scale, LSI will now give you the ability to perform fast queries against other attributes (or columns) in your table. This gives you the ability to perform richer queries while still meeting the low-latency demands of responsive, scalable applications.

imageOur customers have been asking us to expand the query capabilities of DynamoDB and we’re excited to see how they use LSI. Milo Milovanovic, Washington Post Principal Systems Architect, reports that “database performance and scalability are critical for delivering new services to our 34+ million readers on any device. For this reason, we chose DynamoDB to power our popular Social Reader app and site experience on socialreader.com. The fast and flexible query performance that local secondary indexes provide will allow us to further optimize our social intelligence, and continue to improve our readers’ experiences.”

As I discussed in a recent blog post, after years of building highly scalable and highly available e-commerce and cloud computing services, Amazon has come to realize that relational databases should only be used when an application truly needs the complex query, table join and transaction capabilities of a full-blown relational database. In all other cases, when such relational features are not needed, we default to DynamoDB as it offers a more available, more scalable and ultimately a lower cost solution.

When DynamoDB launched last year, it offered simple but powerful query capabilities. Customers could choose from two types of keys for primary index querying: Simple Hash Keys and Composite Hash Key / Range Keys:

  • Simple Hash Key gives DynamoDB the Distributed Hash Table abstraction. The key is hashed over the different partitions to optimize workload distribution. For more background on this please read the original Dynamo paper.

  • Composite Hash Key with Range Key allows the developer to create a primary key that is the composite of two attributes, a “hash attribute” and a “range attribute.” When querying against a composite key, the hash attribute needs to be uniquely matched but a range operation can be specified for the range attribute: e.g. all orders from Werner in the past 24 hours, or all games played by an individual player in the past 24 hours (see the query sketch after this list).
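
As an illustration of that second key type, here is roughly what the "games played by an individual player in the past 24 hours" query could look like with the AWS SDK for Java; the PlayerActivity table name, its attribute names, and the cutoff timestamp are assumptions made for the sake of the example:

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.*;

public class GamesInLast24Hours {
    public static void main(String[] args) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        // ISO-8601 timestamps sort lexically, so a string range condition works.
        String cutoff = "2013-04-17T00:00:00Z";

        Map<String, Condition> keyConditions = new HashMap<String, Condition>();
        keyConditions.put("PlayerName", new Condition()
            .withComparisonOperator(ComparisonOperator.EQ)        // hash attribute: exact match
            .withAttributeValueList(new AttributeValue().withS("Werner")));
        keyConditions.put("GameStartTime", new Condition()
            .withComparisonOperator(ComparisonOperator.GT)        // range attribute: range operation
            .withAttributeValueList(new AttributeValue().withS(cutoff)));

        QueryRequest query = new QueryRequest()
            .withTableName("PlayerActivity")
            .withKeyConditions(keyConditions);

        QueryResult result = client.query(query);
        System.out.println(result.getItems());
    }
}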

With LSI we expand DynamoDB’s existing query capabilities with support for more complex queries. Customers can now create indexes on non-primary key attributes and quickly retrieve records within a hash partition (i.e., items that share the same hash value in their primary key).

Since we launched DynamoDB, we have seen many database customers migrate their apps from traditional sharded relational database deployments to DynamoDB. Some of these developers who were used to the broad query flexibility offered by relational databases asked us to add more query functionality to DynamoDB. These developers will now find LSI to be useful and familiar, as it enables them to index non-primary key attributes and quickly query records within a hash partition. LSI enables more applications to benefit from DynamoDB’s scalability, availability, resilience, low cost and minimal operational overhead.

What are Local Secondary Indexes (LSI)?

As an example, let’s say that your social gaming application tracks player activity. Database scalability is important for social games, which can attract tens of millions of players soon after launch. Consistent, rock solid low-latency database performance is important too, because social games are highly interactive. Let’s examine how DynamoDB would support a social game, and then add the benefit of local secondary indexes.

DynamoDB stores information as database tables, which are collections of individual items. Each item is a collection of data attributes. The items are analogous to rows in a spreadsheet, and the attributes are analogous to columns. Each item is uniquely identified by a primary key, which is composed of its first two attributes, called the hash and range.

DynamoDB queries refer to the hash and range attributes of items you’d like to access. Local secondary indexes let you query for hash keys together with other attributes besides the range key. LSI queries are local in the sense that they always refer to the same hash key as standard queries.

Based on the design of your game, you might decide to record each player’s final score for each game he completes. You would track at least three pieces of data: the player’s name, the game’s start time, and the final score.

In DynamoDB, your Player Activity table would store each game as an item with PlayerName, GameStartTime, and Score attributes.

Suppose you always want to show players a history of the last 10 games they played. This is a natural fit for DynamoDB. By setting up a DynamoDB table with PlayerName as the hash key and GameStartTime as the range key, you can quickly run queries like: “Show me the last 10 games played by John”. However, once you set up your table like this, you couldn’t run efficient queries on other attributes like “Score”. That was before LSI. Now, you can use LSI to define a secondary index on the “Score” attribute and quickly run queries like “Show me John’s all-time top 5 scores.” The query result is automatically ordered by score.
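
A minimal sketch of that "all-time top 5 scores" query, assuming the AWS SDK for Java and a hypothetical LSI named ScoreIndex on the Score attribute (the index name is not given in the post):

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.*;

public class TopFiveScores {
    public static void main(String[] args) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        // Only the hash key is constrained; the index supplies the Score ordering.
        Map<String, Condition> keyConditions = new HashMap<String, Condition>();
        keyConditions.put("PlayerName", new Condition()
            .withComparisonOperator(ComparisonOperator.EQ)
            .withAttributeValueList(new AttributeValue().withS("John")));

        QueryRequest query = new QueryRequest()
            .withTableName("PlayerActivity")
            .withIndexName("ScoreIndex")        // hypothetical LSI on the Score attribute
            .withKeyConditions(keyConditions)
            .withScanIndexForward(false)        // highest scores first
            .withLimit(5);                      // all-time top 5

        QueryResult result = client.query(query);
        System.out.println(result.getItems());
    }
}

Setting ScanIndexForward to false returns items in descending order of the index's range attribute (Score), so the first five items returned are the player's highest scores.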

With LSI, your application can get the data it needs much more quickly and efficiently than ever before. No more downloading and sorting through results. By using LSI, you can now push that work to DynamoDB. Crucially, it does so while protecting the scalability and performance that our customers demand. Tables with one or more LSIs will exhibit the same latency and throughput performance as those without any indexes.

Start with DynamoDB

The enhanced query flexibility that local secondary indexes provide means DynamoDB can support an even broader range of workloads. As I mentioned earlier, since scalability and availability of our apps are of critical importance at Amazon, we have already come to start with DynamoDB as the default choice for every application that does not require the flexibility of relational databases like Oracle or MySQL. Customers tell us they’re adopting the same practice, particularly in the areas of digital advertising, social gaming and connected device applications where high availability, seamless scalability, predictable performance and low latency are very critical.

Valentino Volonghi, Chief Architect of retargeting platform AdRoll, says “we use DynamoDB to bid on more than 7 billion impressions per day on the Web and FBX. AdRoll’s bidding system accesses more than a billion cookie profiles stored in DynamoDB, and sees uniform low-latency response. In addition, the availability of DynamoDB in all AWS regions allows our lean team to meet the rigorous low latency demands of real-time bidding in countries across the world without having to worry about infrastructure management.” In the past I have also highlighted other advertising applications from customers like Madwell and Shazam where seamless scale, high availability, predictable performance and low latency are very important.

Ankur Bulsara, CTO of the Scopely social gaming platform, says LSI will enable his team to deploy DynamoDB even more broadly. “We default to DynamoDB wherever we can, and also use MySQL for some query types,” he says. “We’re very excited that local secondary indexes will allow us to further remove traditional RDBMSes from our ever-growing stack. DynamoDB is the future, and with LSI, the future is very bright.” In the past, I have highlighted many other gaming customers such as Electronic Arts and Halfbrick Studios. Gaming customers value DynamoDB’s seamless scale, since successful games can scale from a few users to tens of millions of users in a matter of weeks.

Today, local secondary indexes must be defined at the time you create your DynamoDB tables. In the future, we plan to provide you with the ability to add or drop LSI for existing tables. If you want to equip an existing DynamoDB table with local secondary indexes immediately, you can export the data from your existing table using Elastic MapReduce, and import it into a new table with LSI.

You can get started with DynamoDB and Local Secondary Indexes right away with the DynamoDB free tier – LSI is available today in all AWS regions except GovCloud.

For more information, please see the appropriate topics in the Amazon DynamoDB developer guide.


<Return to section navigation list>
