Saturday, June 30, 2012

Windows Azure and Cloud Computing Posts for 6/26/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 6/30/2012 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

The Windows Azure Storage Team posted Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS on 6/12/2012 (missed when posted due to MEETAzure traffic):

We’re excited to announce that, as part of version 2012-02-12, we have introduced Table Shared Access Signatures (SAS), Queue SAS and updates to Blob SAS. In this blog, we will highlight usage scenarios for these new features along with sample code using the Windows Azure Storage Client Library v1.7.1, which is available on GitHub.

Shared Access Signatures allow granular access to tables, queues, blob containers, and blobs. A SAS token can be configured to provide specific access rights, such as read, write, update, or delete, to a specific table, key range within a table, queue, blob, or blob container, for a specified time period or without any limit. The SAS token appears as part of the resource’s URI as a series of query parameters. Prior to version 2012-02-12, Shared Access Signatures could only grant access to blobs and blob containers.

SAS Update to Blob in version 2012-02-12

In the 2012-02-12 version, Blob SAS has been extended to allow unbounded access time to a blob resource instead of the previously limited one hour expiry time for non-revocable SAS tokens. To make use of this additional feature, the sv (signed version) query parameter must be set to "2012-02-12" which would allow the difference between se (signed expiry, which is mandatory) and st (signed start, which is optional) to be larger than one hour. For more details, refer to the MSDN documentation.

Best Practices When Using SAS

The following are best practices to follow when using Shared Access Signatures.

  1. Always use HTTPS when making SAS requests. SAS tokens are sent over the wire as part of a URL, and can potentially be leaked if HTTP is used. A leaked SAS token grants access until it either expires or is revoked.
  2. Use server stored access policies for revocable SAS. Each container, table, and queue can now have up to five server stored access policies at once. Revoking one of these policies invalidates all SAS tokens issued using that policy. Consider grouping SAS tokens such that logically related tokens share the same server stored access policy. Avoid inadvertently reusing revoked access policy identifiers by including a unique string in them, such as the date and time the policy was created. (A code sketch follows this list.)
  3. Don’t specify a start time, or if you do, allow at least five minutes for clock skew. Due to clock skew, a SAS token might start or expire earlier or later than expected. If you do not specify a start time, then the start time is considered to be now, and you do not have to worry about clock skew for the start time.
  4. Limit the lifetime of SAS tokens and treat them as a lease. Clients that need more time can request an updated SAS token.
  5. Be aware of the version: starting with the 2012-02-12 version, SAS tokens contain a new version parameter (sv). sv defines how the various parameters in the SAS token must be interpreted and the version of the REST API to use to execute the operation. This implies that services responsible for providing SAS tokens to client applications must issue tokens for the version of the REST protocol that those clients understand. Make sure clients understand the REST protocol version specified by sv when they are given a SAS to use.
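The stored-access-policy guidance in point 2 can be illustrated with a minimal sketch against the 1.7-era .NET blob client; the container name, policy identifier, and connection string below are placeholders, not values from the original post:

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    internal static class StoredPolicySample
    {
        // Sketch: issue a revocable blob SAS by signing against a stored access policy.
        internal static string CreateRevocableSas(string connectionString)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobContainer container = account.CreateCloudBlobClient()
                .GetContainerReference("documents"); // hypothetical container

            // Persist the permissions and expiry on the container (up to five policies per container).
            BlobContainerPermissions permissions = container.GetPermissions();
            permissions.SharedAccessPolicies.Add("read-20120630", new SharedAccessPolicy
            {
                Permissions = SharedAccessPermissions.Read,
                SharedAccessExpiryTime = DateTime.UtcNow.AddDays(7)
            });
            container.SetPermissions(permissions);

            // The token only references the policy identifier, so revoking or editing
            // the stored policy immediately affects every SAS issued against it.
            return container.GetSharedAccessSignature(new SharedAccessPolicy(), "read-20120630");
        }
    }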
Table SAS

SAS for tables allows account owners to grant SAS token access by defining the following restrictions on the SAS policy:

1. Table granularity: users can grant access to an entire table (tn) or to a table range defined by a table (tn) along with a partition key range (startpk/endpk) and row key range (startrk/endrk).

To better understand the range to which access is granted, let us take an example data set:

[Example data set (image in the original post): partition PK001 holds rows 1–300, PK002 holds rows 301–600, and PK003 holds rows 601 and up, with row keys RK001, RK002, … within each partition.]

The permission is specified as a range of rows from (startpk, startrk) until (endpk, endrk).

Example 1: (startpk, startrk) = (,) (endpk, endrk) = (,)
Allowed Range = All rows in table

Example 2: (startpk, startrk) = (PK002,) (endpk, endrk) = (,)
Allowed Range = All rows starting from row # 301

Example 3: (startpk, startrk) = (PK002,) (endpk, endrk) = (PK002,)
Allowed Range = All rows starting from row # 301 and ending at row # 600

Example 4: (startpk, startrk) = (PK001, RK002) (endpk, endrk) = (PK003, RK003)
Allowed Range = All rows starting from row # 2 and ending at row # 603.
NOTE: The row (PK002, RK100) is accessible because the row key limit is hierarchical and not absolute (i.e. it is not applied as startrk <= rowkey <= endrk).

2. Access permissions (sp): users can grant access rights to the specified table or table range, such as Query (r), Add (a), Update (u), Delete (d), or a combination of them.

3. Time range (st/se): users can limit the SAS token access time. Start time (st) is optional but Expiry time (se) is mandatory, and no limits are enforced on these parameters. Therefore a SAS token may be valid for a very large time period.

4. Server stored access policy (si): users can either generate offline SAS tokens where the policy permissions described above are part of the SAS token, or they can choose to store specific policy settings associated with a table. These policy settings are limited to the time range (start time and end time) and the access permissions. Stored access policy provides additional control over generated SAS tokens where policy settings could be changed at any time without the need to re-issue a new token. In addition, revoking SAS access would become possible without the need to change the account’s key.

For more information on the different policy settings for Table SAS and the REST interface, please refer to the SAS MSDN documentation.
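As a concrete illustration of the tn/startpk/endpk parameters described above, here is a minimal sketch using the table API from the later 2.x releases of the .NET storage client library (the table name, key range, and connection string are illustrative; the 1.7.1 library exposes the same capability with somewhat different type names):

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    internal static class TableSasSample
    {
        // Sketch: grant query-only access to the partition range PK002 through PK002 (Example 3 above).
        internal static string CreateTableSas(string connectionString)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudTable table = account.CreateCloudTableClient()
                .GetTableReference("Customers"); // hypothetical table

            var policy = new SharedAccessTablePolicy
            {
                Permissions = SharedAccessTablePermissions.Query,        // sp = r
                SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1) // se
            };

            // startpk/startrk and endpk/endrk bound the accessible range; nulls leave a bound open.
            return table.GetSharedAccessSignature(policy, null /* no stored policy */,
                "PK002", null, "PK002", null);
        }
    }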

Though non-revocable Table SAS provides large time period access to a resource, we highly recommend that you always limit its validity to a minimum required amount of time in case the SAS token is leaked or the holder of the token is no longer trusted. In that case, the only way to revoke access is to rotate the account’s key that was used to generate the SAS, which would also revoke any other SAS tokens that were already issued and are currently in use. In cases where large time period access is needed, we recommend that you use a server stored access policy as described above.

Most Shared Access Signature usage falls into two different scenarios:

  1. A service granting access to clients, so those clients can access their parts of the storage account or access the storage account with restricted permissions. Example: a Windows Phone app for a service running on Windows Azure. A SAS token would be distributed to clients (the Windows Phone app) so it can have direct access to storage.
  2. A service owner who needs to keep his production storage account credentials confined within a limited set of machines or Windows Azure roles which act as a key management system. In this case, a SAS token will be issued on an as-needed basis to worker or web roles that require access to specific storage resources. This allows services to reduce the risk of getting their keys compromised.

Along with the different usage scenarios, SAS token generation usually follows the models below:

  • A SAS Token Generator or producer service responsible for issuing SAS tokens to applications, referred to as SAS consumers. The SAS token generated is usually for limited amount of time to control access. This model usually works best with the first scenario described earlier where a phone app (SAS consumer) would request access to a certain resource by contacting a SAS generator service running in the cloud. Before the SAS token expires, the consumer would again contact the service for a renewed SAS. The service can refuse to produce any further tokens to certain applications or users, for example in the scenario where a user’s subscription to the service has expired. Diagram 1 illustrates this model.


Diagram 1: SAS Consumer/Producer Request Flow

  • The communication channel between the application (SAS consumer) and SAS Token Generator could be service specific where the service would authenticate the application/user (for example, using OAuth authentication mechanism) before issuing or renewing the SAS token. We highly recommend that the communication be a secure one in order to avoid any SAS token leak. Note that steps 1 and 2 would only be needed whenever the SAS token approaches its expiry time or the application is requesting access to a different resource. A SAS token can be used as long as it is valid which means multiple requests could be issued (steps 3 and 4) before consulting back with the SAS Token Generator service.
  • A one-time generated SAS token tied to a signed identifier controlled as part of a stored access policy. This model would work best in the second scenario described earlier where the SAS token could either be part of a worker role configuration file, or issued once by a SAS token generator/producer service where maximum access time could be provided. In case access needs to be revoked or permission and/or duration changed, the account owner can use the Set Table ACL API to modify the stored policy associated with issued SAS token. …

The post continues with sample C# code.


Andrew Edwards and Brad Calder of the Windows Azure Storage Team posted Exploring Windows Azure Drives, Disks, and Images on 6/27/2012:

With the preview of Windows Azure Virtual Machines, we have two new special types of blobs stored in Windows Azure Storage: Windows Azure Virtual Machine Disks and Windows Azure Virtual Machine Images. And of course we also have the existing preview of Windows Azure Drives. In the rest of this post, we will refer to these as storage, disks, images, and drives. This post explores what drives, disks, and images are and how they interact with storage.

Virtual Hard Drives (VHDs)

Drives, disks, and images are all VHDs stored as page blobs within your storage account. There are actually several slightly different VHD formats: fixed, dynamic, and differencing. Currently, Windows Azure only supports the format named ‘fixed’. This format lays the logical disk out linearly within the file format, such that disk offset X is stored at blob offset X. At the end of the blob, there is a small footer that describes the properties of the VHD. All of this is stored in the page blob and adheres to the standard VHD format, so you can take this VHD and mount it on your server on-premises if you choose to. Often, the fixed format wastes space because most disks have large unused ranges in them. However, we store our ‘fixed’ VHDs as a page blob, which is a sparse format, so we get the benefits of both the ‘fixed’ and ‘expandable’ disks at the same time.

Uploading VHDs to Windows Azure Storage

You can upload your VHD into your storage account to use it for either PaaS or IaaS. When you are uploading your VHD into storage, you will want to use a tool that understands that page blobs are sparse, and only uploads the portions of the VHD that have actual data in them. Also, if you have dynamic VHDs, you want to use a tool that will convert your dynamic VHD into a fixed VHD as it is doing the upload. CSUpload will do both of these things for you, and it is included as part of the Windows Azure SDK.

Persistence and Durability

Since drives, disks, and images are all stored in storage, your data will be persisted even when your virtual machine has to be moved to another physical machine. This means your data gets to take advantage of the durability offered by the Windows Azure Storage architecture, where all of your non-buffered and flushed writes to the disk/drive are replicated 3 times in storage to make it durable before returning success back to your application.

Drives (PaaS)

Drives are used by the PaaS roles (Worker Role, Web Role, and VM Role) to mount a VHD and assign a drive letter. There are many details about how you use these drives here. Drives are implemented with a kernel mode driver that runs within your VM, so your disk IO to and from the drive in the VM will cause network IO to and from the VM to your page blob in Windows Azure Storage. The following diagram shows the driver running inside the VM, communicating with storage through the VM’s virtual network adapter.

[Diagram: the Windows Azure Drive driver runs inside the VM and reaches the page blob through the VM’s virtual network adapter]

PaaS roles are allowed to mount up to 16 drives per role.

Disks (IaaS)

When you create a Windows Azure Virtual Machine, the platform will attach at least one disk to the VM for your operating system disk. This disk will also be a VHD stored as a page blob in storage. As you write to the disk in the VM, the changes to the disk will be made to the page blob inside storage. You can also attach additional disks to your VM as data disks, and these will be stored in storage as page blobs as well.

Unlike for drives, the code that communicates with storage on behalf of your disk is not within your VM, so doing IO to the disk will not cause network activity in the VM, although it will cause network activity on the physical node. The following diagram shows how the driver runs in the host operating system, and the VM communicates through the disk interface to the driver, which then communicates through the host network adapter to storage.

[Diagram: the disk driver runs in the host OS; the VM talks to it through the disk interface, and it reaches storage through the host network adapter]

There are limits to the number of disks a virtual machine can mount, varying from 16 data disks for an extra-large virtual machine, to one data disk for an extra small virtual machine. Details can be found here.

IMPORTANT: The Windows Azure platform holds an infinite lease on all the page blobs that it considers disks in your storage account so that you don’t accidentally delete the underlying page blob, container, or storage account while the VM is using the VHD. If you want to delete the underlying page blob, the container it is within, or the storage account, you will need to detach the disk from the VM first as shown here:

[Screenshot: the Detach Disk command for the virtual machine]

And then select the disk you want to detach and then delete:

[Screenshot: selecting the disk to detach and delete]

Then you need to remove the disk from the portal:

[Screenshot: the Disks list in the portal]

and then you can select ‘delete disk’ from the bottom of the window:

[Screenshot: the Delete Disk command at the bottom of the window]

Note: when you delete the disk here you are not deleting the underlying VHD page blob in your storage account. You are only disassociating it from the disks that can be attached to Windows Azure Virtual Machines. After you have done all of the above, you will be able to delete the VHD from your storage account, using the Windows Azure Storage REST APIs or storage explorers.

Images (IaaS)

Windows Azure uses the concept of an “Image” to describe a template VHD that can be used to create one or more Virtual Machines. Windows Azure and some partners provide images that can be used to create Virtual Machines. You can also create images for yourself by capturing an image of an existing Windows Azure Virtual Machine, or you can upload a sysprep’d image to your storage account. An image is also in the VHD format, but the platform will not write to the image. Instead, when you create a Virtual Machine from an image, the system will create a copy of that image’s page blob in your storage account, and that copy will be used for the Virtual Machine’s operating system disk.

IMPORTANT: Windows Azure holds an infinite lease on all the page blobs, the blob container and the storage account that it considers images in your storage account. Therefore, to delete the underlying page blob, you need to delete the image from the portal by going to the “Virtual Machines” section, clicking on “Images”:

[Screenshot: the Images list under Virtual Machines in the portal]

Then you select your image and press “Delete Image” at the bottom of the screen. This will disassociate the VHD from your set of registered images, but it does not delete the page blob from your storage account. At that point, you will be able to delete the image from your storage account.

Temporary Disk

There is another disk present in all web roles, worker roles, VM Roles, and Windows Azure Virtual Machines, called the temporary disk. This is a physical disk on the node that can be used for scratch space. Data on this disk will be lost when the VM is moved to another physical machine, which can happen during upgrades, patches, and when Windows Azure detects something is wrong with the node you are running on. The sizes offered for the temporary disk are defined here.

The temporary disk is the ideal place to store your operating system’s pagefile.

IMPORTANT: The temporary disk is not persistent. You should only write data onto this disk that you are willing to lose at any time.

Billing

Windows Azure Storage charges for Bandwidth, Transactions, and Storage Capacity. The per-unit costs of each can be found here.

Bandwidth

We recommend mounting drives from within the same location (e.g., US East) as the storage account they are stored in, as this offers the best performance, and also will not incur bandwidth charges. With disks, you are required to use them within the same location the disk is stored.

Transactions

When connected to a VM, disk IOs from both drives and disks will be satisfied from storage (unless one of the layers of cache described below can satisfy the request first). Small disk IOs will incur one Windows Azure Storage transaction per IO. Larger disk IOs will be split into smaller IOs, so they will incur more transaction charges. The breakdown for this is:

  • Drives
    • IO < 2 megabytes will be 1 transaction
    • IO >= 2 megabytes will be broken into transactions of 2MBs or smaller
  • Disks
    • IO < 128 kilobytes will be 1 transaction
    • IO >= 128 kilobytes will be broken into transactions of 128KBs or smaller

In addition, operating systems often perform a little read-ahead for small sequential IOs (typically less than 64 kilobytes), which may result in larger sized IOs to drives/disks than the IO size being issued by the application. If the prefetched data is used, then this can result in fewer transactions to your storage account than the number of IOs issued by your application.
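Assuming the split sizes above, the number of billable transactions for a single IO is just a ceiling division; a minimal sketch (the helper and its name are illustrative, not part of any SDK):

    using System;

    internal static class TransactionEstimator
    {
        // Sketch: estimate Windows Azure Storage transactions incurred by one IO,
        // using the 2 MB (drive) and 128 KB (disk) split sizes described above.
        internal static long TransactionsPerIo(long ioSizeInBytes, bool isDrive)
        {
            long splitSize = isDrive ? 2L * 1024 * 1024 : 128L * 1024;
            return Math.Max(1, (ioSizeInBytes + splitSize - 1) / splitSize); // ceiling division
        }
    }

For example, a single 1 MB read against a disk is split into eight 128 KB transactions, while the same read against a drive incurs one transaction.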

Storage Capacity

Windows Azure Storage stores page blobs and thus VHDs in sparse format, and therefore only charges for data within the VHD that has actually been written to during the life of the VHD. Therefore, we recommend using ‘quick format’ because this will avoid storing large ranges of zeros within the page blob. When creating a VHD you can choose the quick format option by specifying the below:

[Screenshot: the quick format option when formatting the VHD]

It is also important to note that when you delete files within the file system used by the VHD, most operating systems do not clear or zero these ranges, so you can still be paying capacity charges within a blob for the data that you deleted via a disk/drive.

Caches, Caches, and more Caches

Drives and disks both support on-disk caching and some limited in-memory caching. Many layers of the operating system as well as application libraries do in-memory caching as well. This section highlights some of the caching choices you have as an application developer.

Caching can be used to improve performance, as well as to reduce transaction costs. The following table outlines some of the caches that are available for use with disks and drives. Each is described in more detail below the table.

[Table: caching options available for disks and drives; each is described below]

FileStream (applies to both disks and drives)

.NET framework’s FileStream class will cache reads and writes in memory to reduce IOs to the disk. Some of the FileStream constructors take a cache size, and others will choose the default 8k cache size for you. You can not specify that the class use no memory cache, as the minimum cache size is 8 bytes. You can force the buffer to be written to disk by calling the FileStream.Flush(bool) API.
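For example, the following sketch (the path and buffer size are arbitrary) sizes the FileStream buffer explicitly and uses Flush(true) to push the buffered bytes to the operating system and on toward the disk or drive:

    using System.IO;
    using System.Text;

    internal static class FileStreamBufferSample
    {
        internal static void AppendRecord(string path, string record)
        {
            byte[] payload = Encoding.UTF8.GetBytes(record);

            // 64 KB in-memory buffer: small writes accumulate in memory until the buffer fills.
            using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read, 64 * 1024))
            {
                stream.Write(payload, 0, payload.Length);
                stream.Flush(true); // flush the FileStream buffer and request a write-through to the disk/drive
            }
        }
    }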

Operating System Caching (applies to both disks and drives)

The operating system itself will do in-memory buffering for both reads and writes, unless you explicitly turn it off when you open a file using FILE_FLAG_WRITE_THROUGH and/or FILE_FLAG_NO_BUFFERING. An in-depth discussion of the in memory caching behavior of windows is available here.
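From managed code, FILE_FLAG_WRITE_THROUGH corresponds to FileOptions.WriteThrough; FILE_FLAG_NO_BUFFERING has no named FileOptions value and normally requires P/Invoke. A minimal sketch, with an arbitrary path and buffer size:

    using System.IO;

    internal static class WriteThroughSample
    {
        // Sketch: open a file so the OS does not hold completed writes in its write cache.
        internal static FileStream OpenForWriteThrough(string path)
        {
            return new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None,
                4096, FileOptions.WriteThrough);
        }
    }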

Windows Azure Drive Caches

Drives allow you to choose whether to use the node’s local temporary disk as a read cache, or to use no cache at all. The space for a drive’s cache is allocated from your web role or worker role’s temporary disk. This cache is write-through, so writes are always committed immediately to storage. Reads will be satisfied either from the local disk, or from storage.

Using the drive local cache can improve sequential IO read performance when the reads ‘hit’ the cache. Sequential reads will hit the cache if:

  1. The data has been read before. The data is cached on the first time it is read, not on first write.
  2. The cache is large enough to hold all of the data.

Access to the blob can often deliver a higher rate of random IOs than the local disk. However, these random IOs will incur storage transaction costs. To reduce the number of transactions to storage, you can use the local disk cache for random IOs as well. For best results, ensure that your random writes to the disk are 8KB aligned, and the IO sizes are in multiples of 8KB.

Windows Azure Virtual Machine Disk Caches

When deploying a Virtual Machine, the OS disk has two host caching choices:

  1. Read/Write (Default) – write back cache
  2. Read - write through cache

When you setup a data disk on a virtual machine, you get three host caching choices:

  1. Read/Write – write back cache
  2. Read – write through cache
  3. None (Default)

The type of cache to use for data disks and the OS disk is not currently exposed through the portal. To set the type of host caching, you must either use the Service Management APIs (either Add Data Disk or Update Data Disk) or the Powershell commands (Add-AzureDataDisk or Set-AzureDataDisk).

The read cache is stored both on disk and in memory in the host OS. The write cache is stored in memory in the host OS.

WARNING: If your application does not use FILE_FLAG_WRITE_THROUGH, the write cache could result in data loss because the data could be sitting in the host OS memory waiting to be written when the physical machine crashes unexpectedly.

Using the read cache will improve sequential IO read performance when the reads ‘hit’ the cache. Sequential reads will hit the cache if:

  1. The data has been read before.
  2. The cache is large enough to hold all of the data.

The cache’s size for a disk varies based on instance size and the number of disks mounted. Caching can only be enabled for up to four data disks.

No Caching for Windows Azure Drives and VM Disks

Windows Azure Storage can provide a higher rate of random IOs than the local disk on your node that is used for caching. If your application needs to do lots of random IOs, and throughput is important to you, then you may want to consider not using the above caches. Keep in mind, however, that IOs to Windows Azure Storage do incur transaction costs, while IOs to the local cache do not.

To disable your Windows Azure Drive cache, pass ‘0’ for the cache size when you call the Mount() API.
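A minimal sketch of that call, using the 1.x Windows Azure Drive API (the page blob URI is a placeholder and the surrounding role plumbing is omitted):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    internal static class DriveMountSample
    {
        // Sketch: mount a Windows Azure Drive with the local read cache disabled.
        internal static string MountWithoutCache(CloudStorageAccount account, string pageBlobUri)
        {
            CloudDrive drive = account.CreateCloudDrive(pageBlobUri); // e.g., a .vhd page blob in a "drives" container
            return drive.Mount(0, DriveMountOptions.None);            // a cache size of 0 turns the local cache off
        }
    }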

For a Virtual Machine data disk the default behavior is to not use the cache. If you have enabled the cache on a data disk, you can disable it using the Update Data Disk service management API, or the Set-AzureDataDisk powershell command.

For a Virtual Machine operating system disk the default behavior is to use the cache. If your application will do lots of random IOs to data files, you may want to consider moving those files to a data disk which has the caching turned off.


David Linthicum (@DavidLinthicum) asserted “By hemming and hawing, you're retrenching further into the data center and missing out on tremendous business benefits” in a deck for his Your corporate data needs to be in the public cloud -- starting now post of 6/29/2012 to InfoWorld’s Cloud Computing blog:

Looking for your first cloud computing project? Chances are you're considering a very small, very low-risk application to create on a public PaaS or IaaS cloud provider.

I get the logic: It's a low-value application. If the thing tanks or your information is hacked, no harm, no foul. However, I assert that you could move backward by hedging your bets, retrenching further and further into the data center and missing out on the game-changing advantages of the cloud.

You need to bite the bullet, update that résumé (in case your superiors don't agree), and push your strategic corporate data to the public cloud.

Using the public cloud lets you leverage this data in new ways, thanks to new tools -- without having to pay millions of dollars for new infrastructure to support the database processing. When you have such inexpensive capacity, you'll figure out new ways to analyze your business using this data, and that will lead to improved decisions and -- call me crazy -- a much better business. Isn't that the objective?

Of course, the downside is that your data could be carted off by the feds in a data center raid or hacked through an opening your cloud provider forgot to close. Right? Wrong. The chances of those events (or similar events) occurring are very slim. Indeed, your data is more vulnerable where it now exists, but you have a false sense of security because you can hug your servers.

If you're playing with the public clouds just to say you're in them, while at the same time avoiding any potential downside, you're actually doing more harm than good. Cloud technology has evolved in the last five years, so put aside those old prejudices and assumptions. Now is the time to take calculated risks and get some of your data assets out on the cloud. Most of the Global 2000 will find value there.


Denny Lee (@dennylee) posted To the Cloud…and Beyond! on 6/28/2012:

After all the years of working on enterprise Analysis Services implementations, there were definitely some raised eyebrows when I had started running around with my MacBook Air on the merits of Hadoop – and Hadoop in the Cloud for that matter.

It got a little worse when I was hinting at the shift to Big Data:

It sounded strange until we had announced during the PASS 2011 Day One Keynote which I also called out in my post Connecting PowerPivot to Hadoop on Azure – Self Service BI to Big Data in the Cloud.

The reason for my personal interest in Big Data isn’t just because of my web analytics background during my days at digiMine or Microsoft adCenter. In fact, it was spurred by my years of working on exceedingly complex DW and BI implementations during the awesome craziness as part of the SQL Customer Advisory Team.

Source: https://twitter.com/ftgfop1/status/213309905929637888/photo/1

The uber-examples of the importance of Big Data and BI working together include:

  • Yahoo! TAO is the largest Analysis Services cube, as called out by Scott Burke, SVP of User Data and Analytics, at this year’s Hadoop Summit.
  • Dave Mariani (@dmariani) and my session “How Klout is changing the landscape of social media with Hadoop and BI” – more info at: http://dennyglee.com/2012/05/30/sql-bi-at-hadoop-summit-awesomesauce/

The key theme of our session was simply that:

Hadoop and BI are better together

Saying all of this, after this fun ride, I am both excited and sad to announce that I will be leaving the SQL Customer Advisory Team and joining the SQL BI organization. It’s a pretty cool opportunity as I will get to live the theme of Hadoop and BI are better together by helping to build some internet scale Hadoop and BI systems – and all within the Cloud! I will reveal more later, eh?!

Meanwhile, I will still be blogging and running around talking about Hadoop and BI – so keep on pinging me, eh?! And yes, SSAS Maestros is still very much going to be continuing – in its new home as part of the SQL BI Org.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Gregory Leake posted Data Series: SQL Server in Windows Azure Virtual Machine vs. SQL Database to the Windows Azure blog on 6/26/2012:

Two weeks ago we announced many new upcoming Windows Azure features that are now in public preview. One of these is the new Windows Azure Virtual Machine (VM), which makes it very easy to deploy dedicated instances of SQL Server in the Windows Azure cloud. You can read more about this new capability here. SQL Server running in a Windows Azure VM can serve as the backing database to both cloud-based applications, as well as on-premise applications, much like Windows Azure SQL Database (formerly known as “SQL Azure”). This capability is our implementation of “Infrastructure as a Service” (IaaS).

Windows Azure SQL Database, which is a commercially released service, is our implementation of “Platform as a Service” (PaaS) for a relational database service in the cloud. The introduction of new IaaS capabilities for Windows Azure leads to an important question: when should I choose Windows Azure SQL Database, and when should I choose SQL Server running in a Windows Azure VM when deploying a database to the cloud? In this blog post, we provide some early information to help customers understand some of the differences between the two options, and their relative strengths and core scenarios. Each of these choices might be a better fit than the other depending on what kind of problem you want to solve.

The key criteria in determining which of these two cloud database choices will be the better option for a particular solution are:

  • Full compatibility with SQL Server box product editions
  • Control vs. cost
  • Database scale-out requirements

In general, the two options are optimized for different purposes:

  • SQL Database is optimized to reduce costs to the minimum amount possible. It provides a very quick and easy way to build a scale-out data tier in the cloud, while lowering ongoing administration costs since customers do not have to provision or maintain any virtual machines or database software.
  • SQL Server running in a Windows Azure VM is optimized for the best compatibility with existing applications and for hybrid applications. It provides full SQL Server box product features and gives the administrator full control over a dedicated SQL Server instance and cloud-based VM.
Compatibility with SQL Server Box Product Editions

From a features and compatibility standpoint, running SQL Server 2012 (or earlier edition) in a Windows Azure VM is no different than running full SQL Server box product in a VM hosted in your own data center: it is full box product, and the features supported just depend on the edition of SQL Server you deploy (note that AlwaysOn availability groups are targeted for support at GA but not the current preview release; and that Windows Clustering will not be available at GA). The advantage of running SQL Server in a Windows Azure VM is that you do not need to buy or maintain any infrastructure whatsoever, leading to lower TCO.

Existing SQL Server-based applications will “just work” with SQL Server running in a Windows Azure VM, as long as you deploy the correct edition. If your application requires full SQL Server Enterprise Edition, your existing applications will work as long as you deploy SQL Server Enterprise Edition to the Windows Azure VM(s). This includes features such as SQL Server Integration Services, Analysis Services and Reporting Services. No code migration will be required, and you can run your applications in the cloud or on-premise. Using the new Windows Azure Virtual Network, also announced this month, you will even be able to domain-join your Windows Azure VM running SQL Server to your on-premise domain(s).

This is critical to enabling development of hybrid applications that can span both on-premises and off-premises under a single corporate trust boundary. Also, VM images with SQL Server can be created in the cloud from stock image galleries provided within Windows Azure, or created on-premises from existing deployments and uploaded to Windows Azure. Once deployed, VM images can be moved between on-premises and the cloud with SQL Server License mobility, which is provided for those customers that have licensed SQL Server with Software Assurance (SA).

Windows Azure SQL Database, on the other hand, does not support all SQL Server features. While a very large subset of features are supported (and this set of features is growing over time), it is not full SQL Server Enterprise Edition, and differences will always exist based on different design goals for SQL Database as pointed out above. A guide is available on MSDN that explains the important feature-level differences between SQL Database and SQL Server box product. Even with these differences, however, tools such as SQL Server Management Studio and SQL Server Data Tools can be used with SQL Database as well as SQL Server running on premises and in a Windows Azure VM.

In a nutshell, running SQL Server in a Windows Azure VM will most often be the best route to migrate existing applications and services to Windows Azure given its compatibility with the full SQL Server box product, and for building hybrid applications and services spanning on-premises and the cloud under a single corporate trust boundary. However, for new cloud-based applications and services, SQL Database might be the better choice for reasons discussed further below.

Control vs. Cost

While SQL Server running in a Windows Azure VM will offer the same database features as the box product, SQL Database aims as a service to minimize costs and administration overhead. With SQL Database, for example, you do not pay for compute resources in the cloud. Rather, you just pay a consumption fee per database based on the size of the database—from as little as $5.00 per month for a 100MB database, to $228.00 per month for a 150GB database (the current size limit for a single SQL Database database).

And while SQL Server running in a Windows Azure VM will offer the best application compatibility, there are two important features of SQL Database that customers should understand:

  • High Availability (HA) and 99.9% database uptime SLA built-in
  • SQL Database Federation

With SQL Database, high availability is a standard feature at no additional cost. Each time you create a Windows Azure SQL Database, that database is actually operating across a primary node and multiple online replicas, such that if the primary fails, a secondary node automatically replaces it within seconds, with no application downtime. This is how we are able to offer a 99.9% uptime SLA with SQL Database at no additional charge.

For SQL Server in a Windows Azure VM, the virtual machine instance will have an SLA (99.9% uptime) at commercial release. This SLA is for the VM itself, not the SQL Server databases. For database HA, you will be able to configure multiple VMs running SQL Server 2012 and setup an AlwaysOn Availability Group; but this will require some manual configuration and management, and you will pay extra for each secondary you operate—just as you would for an on-premises HA configuration.

With SQL Server running in a Windows Azure VM, you control the operating system and database configuration (since it’s your dedicated VM to configure), but it is also up to you to configure and maintain the VM over time, including patching and upgrading the OS and database software, as well as installing any additional software such as anti-virus and backup tools. With SQL Database, you are not running in a VM and have no control over a VM configuration. However, the database software is automatically configured, patched, and upgraded by Microsoft in the data centers, which lowers administration costs.

With SQL Server in a Windows Azure VM, you can also control the size of the VM, providing some level of scale up from smaller compute, storage and memory configurations to larger VM sizes. SQL Database, on the other hand, is designed for a scale-out vs. a scale-up approach to achieving higher throughput rates. This is achieved through a unique feature of SQL Database called Federation. Federation makes it very easy to partition (shard) a single logical database into many physical nodes, providing very high throughput for the most demanding database-driven applications. The SQL Database Federation feature is possible because of the unique PaaS characteristics of SQL Database and its almost friction-free provisioning and automated management. SQL Database Federation is discussed in more detail below.

Database Scale-out Requirements

Another key evaluation criterion for choosing SQL Server running in a Windows Azure VM vs. SQL Database will be performance and scalability. Customers will always get the best vertical scalability (aka ‘scale up’) when running SQL Server on their own hardware, since customers can buy hardware that is highly optimized for performance. With SQL Server running in a Windows Azure VM, performance for a single database will be constrained to the largest virtual machine image possible on Windows Azure—which at its introduction will be a VM with 8 virtual CPUs, 14GB of RAM, 16 TB of storage, and 800 MB/s network bandwidth. Storage will be optimized for performance and configurable by customers. Customers will also be able to configure and run AlwaysOn Availability Groups (at GA, not for preview release), and optionally get additional performance by using read-only secondaries or other scale out mechanisms such as scalable shared databases, peer-to-peer replication, Distributed Partitioned Views, and data-dependent routing.

With SQL Database, on the other hand, customers do not choose how many CPUs or how much memory: SQL Database operates across shared resources that do not need to be configured by the customer. We strive to balance the resource usage of SQL Database so that no one application continuously dominates any resource. However, this means a single SQL Database is by nature limited in its throughput capabilities, and will be automatically throttled if a specific database is pushed beyond certain resource limits. But via a feature called SQL Database Federation, customers can achieve much greater scalability via native scale-out capabilities. Federation enables a single logical database to be easily partitioned into multiple physical nodes.

This native feature in SQL Database makes scale-out much easier to setup and manage. For example, with SQL Database, you can quickly partition a database into a few or even hundreds of nodes, with each node adding to the overall capacity of the data tier (note that applications need to be specifically designed to take advantage of this feature). Partitioning operations are as simple as one line of T-SQL, and the database remains online even during re-partitioning. More information on SQL Database Federation is available here.
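For example, the repartitioning operation mentioned above is a single ALTER FEDERATION … SPLIT AT statement executed against the federation root database; here is a minimal sketch from C# in which the federation name, distribution key, and split value are hypothetical:

    using System.Data.SqlClient;

    internal static class FederationSplitSample
    {
        // Sketch: split a federation member at a boundary value; the database stays online during the operation.
        internal static void SplitAtCustomer(string rootDatabaseConnectionString)
        {
            using (var connection = new SqlConnection(rootDatabaseConnectionString))
            {
                connection.Open();
                using (var command = new SqlCommand(
                    "ALTER FEDERATION Orders_Federation SPLIT AT (CustomerId = 1000)", connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }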

Summary

We hope this blog has helped to introduce some of the key differences and similarities between SQL Server running in a Windows Azure VM (IaaS) and Window Azure SQL Database (PaaS). The good news is that in the near future, customers will have a choice between these two models, and the two models can be easily mixed and matched for different types of solutions.

Criteria | SQL Server inside Windows Azure VM | Windows Azure SQL Database
Time to Solution | |
Migrate Existing Apps | Fast | Moderate
Build New Apps | Moderate | Fast
Cost of Solution | |
Hardware Administration | None | None
Software Administration (Database & OS) | Manual | None
Machine High Availability | Automated (99.9% Uptime SLA at commercial release) | N/A
Database High Availability | With extra VMs and manual setup via AlwaysOn (at commercial release), DBM; DR via log shipping, transactional replication | Standard Feature (99.9% DB uptime SLA)
Cost | Medium | Low
Scale Model | |
Scale-Up | X-Large VM (8 cores, 14GB RAM, up to 16 TB disk space) | Not Supported
Scale-Out | Manual via AlwaysOn read-only secondaries, scalable shared databases, peer-to-peer replication, Distributed Partitioned Views, and data-dependent routing (manual to set up, and applications must be designed for these features) | SQL Database Federation (automated at data tier, with applications designed for Federation)
Control & Customize | |
OS and VM | Full Control | No Control
SQL Server Database Compatibility, Customization | Full support for SQL Server 2012 box product features including database engine, SSIS, SSAS, SSRS | Large subset of SQL Server 2012 features
Hybrid | |
Domain Join and Windows Authentication | Yes | Not possible
Data Synchronization via Azure Data Sync | Supported | Supported
Manageability | |
Resource Governance & Security Level | SQL Instance/VM | Logical DB Server
Tools Support | Existing SQL Server tools such as SSMS, System Center, and SSDT | Existing SQL Server tools such as SSMS, System Center, and SSDT
Manage at Scale Capabilities | Fair | Good

Arnd Christian Koenig, Bolin Ding, Surajit Chaudhuri and Vivek Narasayya published A Statistical Approach Towards Robust Progress Estimation recently. From the Introduction:

Accurate estimation of the progress of database queries can be crucial to a number of applications such as administration of long-running decision support queries. As a consequence, the problem of estimating the progress of SQL queries has received significant attention in recent years [6, 13, 14, 5, 12, 16, 15, 17]. The key requirement for all of these techniques (aside from small overhead and memory footprint) is their robustness, meaning that the estimators need to be accurate across a wide range of queries, parameters and data distributions.

Unfortunately, as was shown in [5], the problem of accurate progress estimation for arbitrary SQL queries is hard in terms of worst-case guarantees: none of the proposed techniques can guarantee any but trivial bounds on the accuracy of the estimation (unless some common SQL operators are not allowed). While the
work of [5] is theoretical and mainly interested in the worst case, the property that no single proposed estimator is robust in general holds in practice as well.
We find that each of the main estimators proposed in the literature performs poorly relative to the alternative estimators for some (types of) queries.

To illustrate this, we compared the estimation errors for 3 major estimators proposed in the literature (DNE [6], the estimator of Luo et al (LUO) [13] and the TGN estimator based on the Total GetNext model [6] tracking the GetNext calls at each node in a query plan) over a number of real-life and benchmark workloads (described in detail in Section 6). We use the average absolute difference between the estimated progress and true progress as the estimator error for each query and then compare the ratio of this error to the minimum error among all three estimators. The results are shown in Figure 1, where the Y-axis shows the ratio and the X-axis iterates over all queries, ordered by ascending ratio for each estimator – note that the Y-axis is in log-scale. As we can see, each estimator is (close to) optimal for a subset of the queries, but also degrades severely (in comparison to the other two), with an error-ratio of 5x or more for a significant fraction of the workload. No single existing estimator performs sufficiently well across the spectrum of queries and data distributions to rely on it exclusively.

However, the relative errors in Figure 1 also suggest that by judiciously selecting the best among the three estimators, we can reduce the progress estimation error. Hence, in absence of a single estimator that is always accurate, an approach that chooses among them could go a long way towards making progress estimation robust.

Unfortunately, there appears to be no straightforward way to precisely state simple conditions under which one estimator outperforms another. While we know that, e.g., the TGN estimator is more sensitive to cardinality estimation errors than DNE but more robust with regard to variance in the number of GetNext calls issued in response to input tuples, neither of these effects can be reliably quantified before a query starts execution. Moreover, a large number of other factors, such as tuple spills due to memory contention, certain optimizations in the processing of nested iterations (see Section 5.1), etc., all impact which progress estimator performs best for a given query.

From Proceedings of the VLDB Endowment, Vol. 5, No. 4. Bolin Ding is at the University of Illinois at Urbana-Champaign; the other three authors are at Microsoft Research.

Progress estimation is becoming more important with increasing adoption of Big Data technologies. Similar work is going on for estimating MapReduce application progress.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

My Big Data in the Cloud article for Visual Studio Magazine asserts “Microsoft has cooked up a feast of value-added big data cloud apps featuring Apache Hadoop, MapReduce, Hive and Pig, as well as free apps and utilities for numerical analysis, publishing data sets, data encryption, uploading files to SQL Azure and blobs.” Here’s the introduction:

Competition is heating up for Platform as a Service (PaaS) providers such as Microsoft Windows Azure, Google App Engine, VMware Cloud Foundry and Salesforce.com Heroku, but cutting compute and storage charges no longer increases PaaS market share. So traditional Infrastructure as a Service (IaaS) vendors, led by Amazon Web Services (AWS) LLC, are encroaching on PaaS providers by adding new features to abstract cloud computing functions that formerly required provisioning by users. For example, AWS introduced Elastic MapReduce (EMR) with Apache Hive for big data analytics in April 2009. In October 2009, Amazon added a Relational Database Services (RDS) beta to its bag of cloud tricks to compete with SQL Azure.

Microsoft finally countered with a multipronged Apache Hadoop on Windows Azure preview in December 2011, aided by Hadoop consultants from Hortonworks Inc., a Yahoo! Inc. spin-off. Microsoft also intends to enter the highly competitive IaaS market; a breakout session at the Microsoft Worldwide Partner Conference 2012 will unveil Windows Azure IaaS for hybrid and public clouds. In late 2011, Microsoft began leveraging its technical depth in business intelligence (BI) and data management with free previews of a wide variety of value-added Software as a Service (SaaS) add-ins for Windows Azure and SQL Azure (see Table 1).

Codename | Description | Link to Tutorial
“Social Analytics” | Summarizes big data from millions of tweets and other unstructured social data provided by the “Social Analytics” Team | http://bit.ly/Kluwd1
“Data Transfer” | Moves comma-separated-value (CSV) and other structured data to SQL Azure or Windows Azure blobs | http://bit.ly/IC1DJp
“Data Hub” | Enables data mavens to establish private data markets that run in Windows Azure | http://bit.ly/IjRCE0
“Cloud Numerics” | Supports developers who use Visual Studio to analyze distributed arrays of numeric data with Windows High-Performance Clusters (HPCs) in the cloud or on premises | http://bit.ly/IccY3o
“Data Explorer” | Provides a UI to quickly mash up big data from various sources and publish the mash-up to a Workspace in Windows Azure | http://bit.ly/IMaOIN
“Trust Services” | Enables programmatically encrypting Windows Azure and SQL Azure data | http://bit.ly/IxJfqL
“SQL Azure Security Services” | Enables assessing the security state of one or all of the databases on a SQL Azure server | http://bit.ly/IxJ0M8
“Austin” | Helps developers process StreamInsight data in Windows Azure |

Table 1. The SQL Azure Labs team and the StreamInsight unit have published no-charge previews of several experimental SaaS apps and utilities for Windows Azure and SQL Azure. The Labs team characterizes these offerings as "concept ideas and prototypes," and states that they are "experiments with no current plans to be included in a product and are not production quality."


In this article, I'll describe how the Microsoft Hadoop on Windows Azure project eases big data analytics for data-oriented developers and provide brief summaries of free SaaS previews that aid developers in deploying their apps to public and private clouds. (Only a couple require a fee for the Windows Azure resources they consume.) I'll also include instructions for obtaining invitations for the previews, as well as links to tutorials and source code for some of them. These SaaS previews demonstrate to independent software vendors (ISVs) the ease of migrating conventional, earth-bound apps to SaaS in the Windows Azure cloud.

This article went to press before the Windows Azure Team’s Meet Azure event on 6/7/2012, where the team unveiled the “Spring Wave” of new features, upgrades and updates to Windows Azure, including Windows Azure Virtual Machines, Virtual Networks, Web Sites and other new and exciting services. Also, the team terminated Codenames “Social Analytics” and “Data Transfer” projects in late June. However, as of 6/27/2012, the “Social Analytics” data stream from the Windows Azure Marketplace Data Market was still operational, so the downloadable C# code for the Microsoft Codename “Social Analytics” Windows Form Client still works.

Note: I modified my working version of the project to copy the data from about a million rows in the DataGridView to a DataGrid.csv file, which can be loaded on demand. Copies of this file and the associated source file for the client’s chart are available from my SkyDrive account. I will update the sample code to use the DataGrid.csv file if the Data Market stream becomes unavailable.

I updated my Visual Studio Magazine Article Retrospective list of cover stories with this latest piece.


Steve Fox (@redmondhockey) described SharePoint Online & Windows Azure: Building Hybrid Applications in a 6/28/2012 post:

Have been spending time here at TechEd EMEA and one of the topics I presented on this week was how you can build hybrid applications using SharePoint Online and Windows Azure. I think there’s an incredible amount of power here for building cloud apps; it represents a great cloud story and one that complements the O365 SAAS capabilities very well.

I’ve not seen a universally agreed-upon definition of hybrid, so in the talk we started by defining a hybrid application as follows:

  • SharePoint Online + Data, Code, Logic elsewhere + Remote Clients/Devices

Within this frame, I then discussed four hybrid scenarios that enable you to connect to SharePoint Online (SPO) in some hybrid way. These scenarios were:

  1. Leveraging Windows Azure SQL Data Sync to synchronize on-premises SQL Server data with Azure SQL Database. With this mechanism, you can then sync your data from on-premises to the cloud and then consume using a WCF service and BCS within SPO, or wrap the data in a REST call and project to a device.
  2. Service-mediated applications, where you can connect cloud-to-cloud systems (in this case I used an example with Windows Azure Data Marketplace) or on-premises-to-cloud systems (where I showed an on-premises LOB to SPO example). Here, we discussed the WCF, REST, and Service Bus—endpoints and transport vehicles for data/messages.
  3. Cloud and Device apps, which is where you can take a RESTified service around your data and expose it to a device (in this case a WP7 app).
  4. Windows Azure SP Instance on the new Virtual Machine (IAAS) to show how you can pull on-premises data using the Service Bus and interact with PAAS applications built using WCF and Windows Azure and expose those in SP.

You can view the deck for the session below. (The recording isn’t up now, but you should be able to view the session on Channel 9 soon.)

These four areas represented patterns for how you can integrate cloud and on-premises systems to build some really interesting hybrid applications—and then leverage the collaborative power of SPO.

Some things we discussed during the session that are worth calling out here:

  • In many cases, when building hybrid cloud apps that integrate with SPO, you’ll be leveraging some type of ‘service.’ This could be WCF, REST, or Web API. Each has its own merits and challenges. If you’re like me and don’t like spending time debugging XML config files, then I would recommend you take a look at the new Web API option for building services. It uses the MVC method and you can use the Azure SDK to build Mobile apps as well as vanilla Web API apps.
  • I’ve seen some discussion on the JSONP method when issuing cross-domain calls for services. I would argue this is okay for endpoints/domains you trust; however, always take care when leveraging methods that are injecting script into your page—this allows for malicious code to be run. And given you’re executing code on the client, malicious code could be run that pooches your page—imagine a hack that attempts to use the SPCOM to do something malicious to your SPO instance. Setting header formatting in your service code can also be a chore.
  • Cross-origin resource sharing (CORS) is an area I’m looking into as a more browser-supported method of cross-domain calls. This enables you to specify or set a wildcard (“*”) flag and pass back to the browser to accept the cross-domain call. A rough sketch of this approach follows this list.
  • JSON is increasingly being used in building web services, so ensure you’re up to speed with what jQuery has to offer. Lots of great plug-ins, plus you then have a leg up when looking at building apps through, say, jQuery for mobile.
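As an illustration of the CORS approach referenced above (ASP.NET Web API had no built-in CORS support at the time, and the wildcard origin shown is a placeholder you would normally replace with the SharePoint Online domains you trust), a DelegatingHandler can stamp the Access-Control-Allow-Origin header on every response:

    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Sketch: message handler that adds the CORS response header so browsers allow cross-domain calls.
    public class CorsHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            response.Headers.Add("Access-Control-Allow-Origin", "*"); // "*" trusts every origin
            return response;
        }
    }

Registering it is one line in the Web API configuration, for example config.MessageHandlers.Add(new CorsHandler());.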

All in all, there’s a ton of options available for you when building SPO apps, and I believe that MS has a great story here for building compelling cloud applications.

For more information and resources re the above, check out:


Mark Stafford (@markdstafford) posted OData 101: Building our first OData consumer on 6/27/2012:

In this OData 101, we will build a trivial OData consumption app that displays some titles from the Netflix OData feed along with some of the information that corresponds to those titles. Along the way, we will learn about:

  • Adding service references and how adding a reference to an OData service is different in Visual Studio 2012
  • NuGet package management basics
  • The LINQ provider in the WCF Data Services client
Getting Started

Let’s get started!

First we need to create a new solution in Visual Studio 2012. I’ll just create a simple C# Console Application:

[Screenshot: the New Project dialog with a C# Console Application selected]

From the Solution Explorer, right-click the project or the References node in the project and select Add Service Reference:

[Screenshot: the Add Service Reference command in Solution Explorer]

This will bring up the Add Service Reference dialog. Paste http://odata.netflix.com/Catalog in the Address textbox, click Go and then replace the contents of the Namespace textbox with Netflix:

[Screenshot: the Add Service Reference dialog with the Netflix catalog address and the Netflix namespace]

Notice that the service is recognized as a WCF Data Service (see the message in the Operations pane).

Managing NuGet Packages

Now for the exciting part: if you check the installed NuGet packages (right-click the project in Solution Explorer, choose Manage NuGet Packages, and select Installed from the left nav), you’ll see that the Add Service Reference wizard also added a reference to the Microsoft.Data.Services.Client NuGet package!

This is new behavior in Visual Studio 2012. Any time you use the Add Service Reference wizard or create a WCF Data Service from an item template, references to the WCF Data Services NuGet packages will be added for you. This means that you can update to the most recent version of WCF Data Services very easily!

[Screenshot: the Manage NuGet Packages dialog showing the Microsoft.Data.Services.Client package installed]

NuGet is a package management system that makes it very easy to pull in dependencies on various libraries. For instance, I can easily update the packages added by ASR (the 5.0.0.50403 versions) to the most recent version by clicking on Updates on the left or issuing the Update-Package command in the Package Manager Console:

[Screenshot: updating the WCF Data Services NuGet packages from the Package Manager]

NuGet has a number of powerful management commands. If you aren’t familiar with NuGet yet, I’d recommend that you browse their documentation. Some of the most important commands are:

  • Install-Package – adds a package (and its dependencies) to the project
  • Update-Package – upgrades installed packages to newer versions
  • Uninstall-Package – removes a package from the project
  • Get-Package – lists the packages installed in the solution or available in the feed

LINQ Provider

Last but not least, let’s write the code for our simple application. What we want to do is select some of the information about a few titles.

The WCF Data Services client includes a powerful LINQ provider for working with OData services. Below is a simple example of a LINQ query against the Netflix OData service.

using System;
using System.Linq;

namespace OData101.BuildingOurFirstODataConsumer
{
    internal class Program
    {
        private static void Main()
        {
            var context = new Netflix.NetflixCatalog(new Uri("http://odata.netflix.com/Catalog"));

            var titles = context.Titles
                .Where(t => t.Name.StartsWith("St") && t.Synopsis.Contains("of the"))
                .OrderByDescending(t => t.AverageRating)
                .Take(10)
                .Select(t => new { t.Name, t.Rating, t.AverageRating });

            Console.WriteLine(titles.ToString());

            foreach (var title in titles)
            {
                Console.WriteLine("{0} ({1}) was rated {2}", title.Name, title.Rating, title.AverageRating);
            }
        }
    }
}

In this sample, we start with all of the titles, filter them down using a compound where clause, order the results, take the top ten, and create a projection that returns only portions of those records. Then we write titles.ToString() to the console, which outputs the URL used to query the OData service. Finally, we iterate the actual results and print relevant data to the console:
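For reference (an approximation only; the exact option ordering and escaping may differ from what the client actually emits), the URI written to the console by titles.ToString() looks roughly like this:

http://odata.netflix.com/Catalog/Titles()?$filter=startswith(Name,'St') and substringof('of the',Synopsis)&$orderby=AverageRating desc&$top=10&$select=Name,Rating,AverageRating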

image

Summary

Here’s what we learned in this post:

  • It’s very easy to use the Add Service Reference wizard to add a reference to an OData service
  • In Visual Studio 2012, the Add Service Reference wizard and the item template for a WCF Data Service add references to our NuGet packages
  • Shifting our distribution vehicle to NuGet allows people to easily update their version of WCF Data Services simply by using the Update-Package NuGet command
  • The WCF Data Services client includes a powerful LINQ provider that makes it easy to compose OData queries

Sample source is attached; I’d encourage you to try it out!


<Return to section navigation list>

Windows Azure Service Bus, Active Directory and Workflow

Ram Jeyaraman reported Windows Azure SDK for PHP available, including support for Service Bus in a 6/29/2012 post to the Interoperability @ Microsoft blog:

imageGood news for all you PHP developers out there: I am happy to share with you the availability of Windows Azure SDK for PHP, which provides PHP-based access to the functionality exposed via the REST API in Windows Azure Service Bus. The SDK is available as open source and you can download it here.

This is an early step as we continue to make Windows Azure a great cloud platform for many languages, including .NET, Java, and PHP. If you’re using Windows Azure Service Bus from PHP, please let us know your feedback on how this SDK is working for you and how we can improve it. Your feedback is very important to us!

You may refer to Windows Azure PHP Developer Center for related information.

Openness and interoperability are important to Microsoft, our customers, partners, and developers. We believe this SDK will enable PHP applications to more easily connect to Windows Azure, making it easier for applications written on any platform to interoperate with one another through Windows Azure.

Thanks,
Ram Jeyaraman
Senior Program Manager
Microsoft Open Technologies, Inc.


Vittorio Bertocci (@vibronet) reported The Recording of “A Lap Around Windows Azure Active Directory” From TechEd Europe is Live in a 6/27/2012 post:

imageHi all! I am typing this post from Schiphol, Amsterdam’s airport, where I am waiting to fly back to Seattle after a super-intense two days at TechEd Europe.

As is by now tradition for Microsoft’s big events, the video recordings of the breakout sessions are available on Channel9 within 24 hours of delivery. Yesterday I presented “A Lap Around Active Directory”, and the recording punctually just popped up: check it out!

The fact that the internet connectivity was down for most of the talk is unfortunate, although I am told it made for a good comic relief. Sorry about that, guys!

imageLuckily I managed to go through the main demo, which is what I needed for making my point. The other demos I planned were more of a nice to have.

I wanted to query the Directory Graph from Fiddler or the RestClient Firefox plugin, to show how incredibly easy it is to connect with the directory and navigate relationships. However, I did have a backup slide showing a prototypical query and its results in JSON; albeit less spectacular, it hopefully conveyed the point.

image

The other thing I wanted to show you was a couple of projects that demonstrate web sign-on with the directory from PHP and Java apps. Given that I had those running on a remote machine (my laptop’s SSD does not have all that room), the absence of connectivity killed that demo from the start. Once again, though, those projects would have demonstrated the same SSO feature I showed with the expense reporting app, and I would not have been able to show the differences in code anyway, given that the session was a 200-level talk. So, all in all, a lot of drama but not a lot of damage after all.

Thanks again for having shown up at the session, and for all the interesting feedback at the book signing. Windows Azure Active Directory is a Big Deal, and I am honored to have had the chance to be among the first to introduce it to you. The developer preview will come out real soon, and I can’t wait to see what you will achieve with it!


Mary Jo Foley (@maryjofoley) asserted “A soon-to-be-delivered preview of a Windows Azure Active Directory update will include integration with Google and Facebook identity providers” in a summary of her With Azure Active Directory, Microsoft wants to be the meta ID hub post of 6/25/2012 for ZDNet’s All About Microsoft blog:

imageMicrosoft isn’t just reimagining Windows and reimagining tablets. It’s also reimagining Active Directory in the form of the recently (officially) unveiled Windows Azure Active Directory (WAAD).

In a June 19 blog post that largely got lost among the Microsoft Surface shuffle last week, Microsoft Technical Fellow John Shewchuk delivered the promised Part 2 of Microsoft’s overall vision for WAAD.

imageWAAD is the cloud complement to Microsoft’s Active Directory directory service. Here’s more about Microsoft’s thinking about WAAD, based on the first of Shewchuk’s posts. It already is being used by Office 365, Windows Intune and Windows Azure. Microsoft’s goal is to convince non-Microsoft businesses and product teams to use WAAD, too.

This is how the identity-management world looks today, in the WAAD team’s view:

And this is the ideal and brave new world they want to see, going forward.


WAAD is the center of the universe in this scenario (something with which some of Microsoft’s competitors unsurprisingly have a problem).

How is Microsoft proposing to go from A to B? Shewchuk explains:

“(W)e currently support WS-Federation to enable SSO (single sign-on) between the application and the directory. We also see the SAML/P, OAuth 2, and OpenID Connect protocols as a strategic focus and will be increasing support for these protocols. Because integration with applications occurs over standard protocols, this SSO capability is available to any application running on any technology stack…

“Because Windows Azure Active Directory integrates with both consumer-focused and enterprise-focused identity providers, developers can easily support many new scenarios—such as managing customer or partner access to information—all using the same Active Directory–based approach that traditionally has been used for organizations’ internal identities.”

Microsoft execs are sharing more information and conducting sessions about WAAD at TechEd Europe, which kicks off on June 25 in Amsterdam.

Microsoft announced the developer preview for WAAD on June 7. This preview includes two capabilities that are not currently in WAAD as it exists today, Shewchuk noted. The two: 1. The ability to connect and use information in the directory through a REST interface; 2. The ability for third-party developers to connect to the SSO the way Microsoft’s own apps do.

The preview also will “include support for integration with consumer-oriented Internet identity providers such as Google and Facebook, and the ability to support Active Directory in deployments that span the cloud and enterprise through synchronization technology,” he blogged.

Shewchuk said the WAAD developer preview should be available “soon.”


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Brian Swan (@brian_swan) posted Windows Azure Websites, Web Roles, and VMs: When to use which? on 6/27/2012 to the [Windows Azure’s] Silver Lining blog:

imageThe June 7th update to Windows Azure introduced two new services (Windows Azure Websites and persistent VMs) that raise the question “When should I use a Windows Azure Website vs. a Web Role vs. a VM?” That’s exactly the question I’ll try to help you answer in this post. (I say “help you answer” because there is no simple, clear-cut answer in all cases. What I’ll try to do here is give you enough information to help you make an informed decision.)

imageThe following table should give you some idea of what each option is ideal for:

image

Actually, I think the use cases for VMs are wide open. You can use them for just about anything you could imagine using a VM for. The tougher distinction (and decision) is between Web Sites and Web Roles. The following table should give you some idea of what Windows Azure features are available in Web Sites and Web Roles:

image

* Web or Worker Roles can integrate MySQL-as-a-service through ClearDB's offerings, but not as part of the Management Portal workflow.

As I said earlier, it’s impossible to provide a definitive answer to the question of which option you should use (Web Sites, Web Roles, or VMs). It really does depend on your application. With that said, I hope the information in the tables above helps you decide what is right for your application. Of course, if you have any questions and/or feedback, let us know in the comments.


Avkash Chauhan (@avkashchauhan) described Deploying Windows Azure Web Sites using Visual Studio Web Publish Wizard in a 6/26/2012 post:

imageCreate your Windows Azure Websites (shared or reserved) and get the publish profile by selecting “Download publish profile” option as shown below:

imageOnce your publish settings file, named _yourwebsitename_.azurewebsites.net.PublishSettings, has been downloaded, you can import it into the Visual Studio Publish Web Application wizard as shown below:

Select _yourwebsitename_.azurewebsites.net.PublishSettings

imageOnce PublishSettings is loaded, all the fields in your web publishing wizard will be filled automatically using the info in PublishSettings file.

You can also use the “Validate Connection” option to test that the connection is correct.

Finally, you can start publishing by selecting the “Publish” option; in the VS2010 output window you will see a publish log like the one below:

------ Build started: Project: LittleWorld, Configuration: Release Any CPU ------
LittleWorld -> C:\2012Data\Development\Azure\LittleWorld\LittleWorld\bin\LittleWorld.dll
------ Publish started: Project: LittleWorld, Configuration: Release Any CPU ------
Transformed Web.config using C:\2012Data\Development\Azure\LittleWorld\LittleWorld\\Web.Release.config into obj\Release\TransformWebConfig\transformed\Web.config.
Auto ConnectionString Transformed Account\Web.config into obj\Release\CSAutoParameterize\transformed\Account\Web.config.
Auto ConnectionString Transformed obj\Release\TransformWebConfig\transformed\Web.config into obj\Release\CSAutoParameterize\transformed\Web.config.
Copying all files to temporary location below for package/publish:
obj\Release\Package\PackageTmp.
Start Web Deploy Publish the Application/package to
https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=littleworld ...
Updating setAcl (littleworld).
Updating setAcl (littleworld).
Updating filePath (littleworld\bin\LittleWorld.dll).
Updating setAcl (littleworld).
Updating setAcl (littleworld).
Publish is successfully deployed.
Site was published successfully
http://littleworld.azurewebsites.net/
========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========
========== Publish: 1 succeeded, 0 failed, 0 skipped ==========

And your website will be ready to Rock:


Michael Collier (@MichaelCollier) showed Windows Azure Web Sites – Using WebDeploy without the New Tools in a 6/27/2012 post:

imageWith the new Windows Azure Web Sites it is very easy to use your favorite deployment tool/technology to deploy the web solution to Windows Azure. You can choose from continuous delivery with Git or TFS, or use tools like FTP or WebDeploy.

imageAs long as you have at least the June 2012 Windows Azure tools update for Visual Studio 2010 (or 2012), using WebDeploy to deploy a Windows Azure Web Site is really easy. Simply import the .publishsettings file; Visual Studio reads the pertinent data and populates the WebDeploy wizard. If you’re not familiar with this process, please have a look at https://www.windowsazure.com/en-us/develop/net/tutorials/web-site-with-sql-database/#deploytowindowsazure, where the process is explained very well (and with nice pictures).

But what if you don’t have the latest tools update? Or, what if you don’t have any Windows Azure tools installed? After all, why should you have to install Windows Azure tools to use WebDeploy?

You can surely use WebDeploy to deploy a web app to Windows Azure Web Sites – you just have to do a little more manual configuration, doing by hand what Visual Studio does for you as part of the latest Windows Azure tools update.

How to Deploy via WebDeploy without the Windows Azure Tools

  1. Download the .publishsettings file for the target Windows Azure Web Site.
  2. Launch the WebDeploy wizard in Visual Studio.
  3. Open the .publishsettings file in Notepad or your favorite text editor. You’ll need to copy a few settings out of this .publishsettings file and paste them into the WebDeploy wizard.
        1. publishUrl
        2. msdeploySite
        3. userName
        4. userPWD

    image

  4. Back in WebDeploy, update the following settings:

image

In the end, your WebDeploy wizard dialog should look like the following:

SNAGHTMLc2c0cdb

Hit the “Publish” button and WebDeploy should quickly publish to Windows Azure Web Sites.

To get started with Windows Azure Web Sites, if you don’t already have Windows Azure, sign up for a FREE Windows Azure 90-day trial account. To start using Windows Azure Web Sites, request access on the ‘Preview Features’ page under the ‘account’ tab, after you log into your Windows Azure account.


Brian Loesgen (@BrianLoesgen) explained Cloning Azure Virtual Machines in a 6/26/2012 post:

imageAzure Virtual Machines are still new to everyone, and I got a great question from a partner a few days ago: “I have an Azure Virtual Machine set up just the way I want it, now I want to spin up multiple instances, how do I do that?”

imageIn the “picture is worth 1,000 words” category (and I don’t have time to write 1,000 words), please see the following sequence of screen shots for the answer.

Things to note:

  • run sysprep (%windir%\system32\sysprep) with “generalize” so each machine will have a unique SID (security ID)
  • when you “capture” the VM, it will be deleted. You can re-create it from the image as shown below

clip_image001

clip_image001[7]

clip_image001[9]

clip_image001[11]


Shan MacArthur described a more complex process for Cloning Windows Azure Virtual Machines in a 6/28/2012 post:

imageIn one of my previous blog articles, I demonstrated how to build a demonstration or development environment for Microsoft Dynamics CRM 2011 using Windows Azure Virtual Machine technology. Once you get a basic virtual machine installed, you will likely want to back it up, or clone it. This article will show you how to manage your virtual machines once you get them set up.

Background

imageBefore I go into some of the details, I want to give a little background on how Windows Azure manages hard disk images that it uses for virtual machines. When you first create a virtual machine, you start with an image, and that image can be one that you have 'captured', or it can be one from the Azure gallery of images. Microsoft provides basic installs of Windows Server 2008 R2, various Linux distros and even Windows Server 2012 Release Candidate. These base images are basically unattended installs that Microsoft (or you) maintain. When you create a new virtual machine, the Azure fabric controller will start the machine up in provisioning mode, which will allow Azure to specify the password for your virtual machine. The machine will initialize itself when it boots for the first time. The new virtual machine will have a single 30GB hard drive that is attached as the C: drive and used for the system installation, as well as D: drive that is used for the swap file and temporary storage.

One of the new features that makes Azure Virtual Machines possible is that the hard drive is now durable, and they do this by storing the hard drive blocks in Azure blobs. This means that your hard drive now can benefit from the redundancy and durability that is baked into the Azure blob storage infrastructure, including multiple geo-distributed replica copies. The downside is that you are dealing with blob storage which is not quite as fast as a physical disk. The operating system volume (C: drive) is stored in Azure blobs, but the temporary volume (D: drive) is not stored in blob storage. As such, the D: drive is a little faster, but it is not durable and should not be used to install applications or their permanent data on it.

You can create additional drives in Windows Azure and attach them to your Azure virtual machine. The number of drives that can be attached to a virtual machine is determined by the virtual machine size. For most real-world installations, you are going to want to create an additional data drive and attach it to your Azure virtual machine. Keep in mind that your C: drive is only 30GB and it will fill up when Windows applies updates or other middleware components get installed. If you have a choice of where to install any application, choose your additional permanent data drive over the system drive whenever possible. …

Shan continues with a detailed, illustrated description of the cloning process.

Following are links to Shan’s two earlier posts on related Virtual Machine topics (missed when posted):


Avkash Chauhan (@avkashchauhan) described Working with Yum on CentOS in a 6/26/2012 post:

imageYum (Yellowdog Updater, Modified)
Searching/listing packages:

List everything available to install from the configured repositories (a list of all packages):

  • # yum list
  • # yum list | grep openssl

imageExample: list all packages starting with python

  • # yum list python*

Example: list the openssl package:

  • #yum list openssl

List all available versions of a package:

  • # yum --showduplicates list php
      Available Packages
      php.x86_64    5.3.3-3.el6_1.3    base
      php.x86_64    5.3.3-3.el6_2.5    updates
      php.x86_64    5.3.3-3.el6_2.6    updates
      php.x86_64    5.3.3-3.el6_2.8    updates
  • # yum --showduplicates list python
      Installed Packages
      python.x86_64    2.6.6-29.el6_2.2    @updates
      Available Packages
      python.i686      2.6.6-29.el6        base
      python.x86_64    2.6.6-29.el6        base
      python.i686      2.6.6-29.el6_2.2    updates
      python.x86_64    2.6.6-29.el6_2.2    updates

Searching into your Linux Box

Search a specific package in all installed packages in the box

  • # yum search <packagename>

For example search for openssl

  • # yum search openssl

Check which package provides a given file or feature:

  • # yum provides nodejs
  • # yum provides */ssl

Find which packages provide files matching a name at any location:

  • # yum provides */nodejs

List the dependencies of a specific package

  • # yum deplist nodejs

Installing a package in your Linux Box:

Install Package in your machine:

  • # yum install <package_name>

Install a collection of Packages “group package” in your machine:

  • # yum groupinstall <group package name>

Updating a package in your Linux Box:

To update all installed packages to the latest versions (this may take some time):

  • # yum update
  • #yum groupupdate

Remove an installed package:

  • # yum remove <package name>
  • #yum groupremove <group package name>

Allow downgrading packages (by installing the yum-allowdowngrade plugin):

  • # yum install yum-allowdowngrade

List of software groups available to install

  • # yum grouplist
  • # yum grouplist | grep gn

List the available repositories for yum:

  • #yum repolist all

Clean the package install cache:

  • # yum clean all

Other Yum-related packages and plugins:

• yum-aliases
• yum-allowdowngrade
• yum-arch
• yum-basearchonly
• yum-changelog
• yum-cron
• yum-downloadonly
• yumex.noarch
• yum-fastestmirror
• yum-filter-data
• yum-noarch
• yum-kernel-module
• yum-kmod
• yum-list-data
• yum-merge-conf
• yum-metadata-parser
• yum-priorities
• yum-protectbase
• yum-protect-packages
• yum-refresh-updatesd
• yum-security
• yum-skip-broken
• yum-tsflags
• yum-updateonboot
• yum-updatesd
• yum-upgrade-helper
• yum-utils
• yum-versionlock


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Himanshu Singh (@himanshuks, pictured below) added a Guest Post: Building Apps for Windows Azure Just Got Better With Cloud9 to the Windows Azure blog on 6/29/2012:

imageEditor’s Note: Today’s guest blog post comes from Matthew Pardee, Developer Evangelist at Cloud9 IDE, which is an online platform for development, where all the code is open-source, free to adapt and use for everyone, anywhere, anytime.

Running production code on different computers and software stacks is a pain. Worse, it can be a huge distraction for developers who would otherwise focus on a singular goal.
Yet many teams still work this way. We suffer through the long periods of managing configurations and reconciling platform differences so everyone on the team can start doing... what exactly? Oh yeah, what they do best: code.

imageThere is an opportunity beyond just having the same IDE that everyone uses to get work done. And that opportunity is where Cloud9 has maintained its focus, and where it continues to innovate. It is the potential of a workflow that exists entirely in the cloud.

Cloud9 is the only cloud-based development platform that offers Windows Azure and Windows Azure Web Sites integration. Here are the features Cloud9 released this week that support this vision and make developing applications for Windows Azure even easier:

Collaboration

Now developers around the world can edit the same code and chat together in real-time. Think of how productive pair programming and code reviews will be, how much more effective presentations are when an audience is truly involved. And how rewarding it is to teach a group of students the art of programming.

Your Workspace in the Cloud

This is the feature that powers every project with its own runtime environment. And it’s the platform professional developers have been waiting for. Now you can compile with gcc. Run the Python and Ruby interpreters. And remember those platform differences you had to maintain for every member of your team? Now when your team works, they are all running on the same OS and software stack. Premium accounts get a full-blooded terminal to interact with their server like they would their local system.

Sync Locally, Work Offline

Cloud9 IDE is now installable as a small app for your desktop - but it’s much more than a desktop IDE. You can keep your hosted projects synced locally so you can keep developing, even when offline. And those desktop projects you had before using Cloud9? They can be pushed to c9.io, giving them all the power and freedom of coding and collaborating in the cloud.

Code Completion

The depth and sophistication of JavaScript analysis - once thought the domain of desktop IDEs - is now on Cloud9. As you type, code suggestions appear below your code. Plus hovering over suggestions shows helpful JavaScript and Node.js documentation. Type Ctrl-Shift-E or Cmd-Shift-E to open the outline view, and quickly navigate to methods in the active file.

Refined Tooling

These features are in addition to the finessed tooling that Cloud9 has been refining for the past year: extremely quick file access, robust search-in-files, in-browser debugger for Node.js, a capable console for running SCM and IDE commands, a rich and full-featured editor, and a beautiful UI. And there is a lot more Cloud9 has to offer.

Deployment: See your Code Come to Life in The Cloud

Cloud9 has been in lockstep with Windows Azure since January of this year when we released support for Windows Azure at Node Summit. And we worked early with Microsoft to integrate Windows Azure Web Sites, unveiling support for the platform right out the gate.

Windows Azure leads the Platform-as-a-Service (PaaS) field in SLA and latency response. They have data centers in America, Europe and Asia, and they offer an integrated suite of service offerings that make application development on their platform a natural choice for any application.

With this release Cloud9 is introducing a new way of getting work done. We are excited to get your feedback as you try these features, and more, at c9.io.

- By Matthew Pardee, Developer Evangelist, Cloud9


David Makogon (@dmakogon) reported ISV Guest Post Series: Linx Powers its Point-of-Sale Systems with Windows Azure in a 6/27/2012 post to the Windows Azure blog:

imageEditor’s Note: Today’s post, written by Linx e-Commerce Program Manager Fernando Chaves [pictured at right], describes how the company uses Windows Azure to scale out its LinxWeb Point-of-Sale system for its customers.

imageLinx is a 26-year-old ISV and a leader in ERP technologies for the retail market in Latin America. We have more than 7,500 customers in Brazil, Latin America and Europe, with more than 60,000 installed Point of Sale (POS) systems. Our company has more than 1,800 employees at our headquarters and branches, and a network of partners spread throughout Brazil and abroad.

imageLinxWeb is a white-label B2C e-Commerce solution that our customers can use as a new POS system in their sales environment. It's integrated with customer on-premises ERP environments, and can be managed just like a traditional POS system while allowing specific customizations such as promotions.

Setting the Stage: Before Windows Azure

Before migrating to Windows Azure, LinxWeb operated on virtual machines (VMs) running in a traditional hosting provider. Though, in theory, this kind of deployment could scale out, it was not easy and fast to achieve and often we needed to scale-up, adding more memory, computing power or network bandwidth to the VM.

LinxWeb was originally single-tenant where every customer had his or her own deployment and environment. Customization was done directly on the customer’s web content files, which could lead to security issues, quality control issues and generation of excessive support requests due to customization errors.

Before Windows Azure migration, the web site was responsible for every processing task: generating product image thumbnails, sending e-mails, and communicating with third-party systems. Every task was done synchronously, impacting e-commerce web site performance and availability to end customers.

The Migration to Windows Azure

When we decided to migrate LinxWeb to Windows Azure, some refactoring was needed, to make it compatible with the stateless nature of Windows Azure web roles and load balancer.

Since each web request could be sent to any web server instance, we needed to externalize session data. We chose Windows Azure SQL Database for our session storage.

We had to remove all file writing to the local disk, since local disk storage isn’t shared between server instances. Additionally, local disk is not durable, unlike Blob Storage or SQL Database, which have replicated disks. Local disks are designed for speed and temporary usage, rather than permanent storage.

Media content, initially saved in SQL Server (in BLOB columns), is now stored in Windows Azure Blob storage, allowing better scalability for the website, since blob content can be cached on the Windows Azure Content Delivery Network (CDN) edge cache. Also, by storing only a blob reference in the SQL Database, rather than an entire media object, we are able to keep the size of our SQL Database much smaller, helping us avoid the storage limit on individual SQL databases (150GB at the time).

Since blobs (and the CDN) are referenced with a URL, browser requests for media now go directly to the CDN, bypassing our web role instances (and taking load off IIS and the database). This change produced an average 75% reduction in database size and also saved money on storage costs, since blob storage is much cheaper than SQL Database. We also saw response times improve on our Web Role instances, since considerable load was taken off of these servers.
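To make the pattern concrete, here is a minimal sketch (illustrative only, not Linx’s actual code; the container name and file handling are assumptions) of uploading a media file with the Windows Azure Storage Client Library 1.x and keeping just the blob URI for the database:

using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class MediaStorage
{
    // Uploads a local media file to the "media" container and returns the blob URI,
    // which is what gets stored in SQL Database instead of the raw bytes.
    public static Uri UploadMedia(string storageConnectionString, string localPath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        CloudBlobContainer container = blobClient.GetContainerReference("media");
        container.CreateIfNotExist(); // 1.x API name

        CloudBlob blob = container.GetBlobReference(Path.GetFileName(localPath));
        blob.UploadFile(localPath);   // the CDN can then serve this blob by URL

        return blob.Uri;
    }
}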

To better use the environment resources, the Windows Azure version was made with multi-tenancy in mind, where multiple customers share compute resources, reducing hosting costs. Understanding some customers may want an isolated environment, we also have a premium offer where a customer receives a dedicated deployment. On this new version, the customer doesn’t update ASP.NET pages directly to change the site layout and look and feel any more. They are allowed to change templates stored in Windows Azure Blob storage, and then the ASP.NET pages process those templates to render updated html to the end user.

Worker roles were used to handle background tasks such as generating picture thumbnails and sending e-mails. Those tasks are queue-driven, using Windows Azure Queues. The worker roles are also responsible for running scheduled tasks, mainly for communication with third party systems. To manage the time handling, we used the Quartz.Net framework, which has the option to run synchronized on multiple worker role instances. This is a very important point: If a scheduler is set up to run in a worker role, that scheduler runs in all instances. Quartz.Net ensures that only one scheduler instance runs at any given time.
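For readers who want to see the shape of this, here is a minimal sketch (an illustration under assumptions, not Linx’s actual code; ThumbnailJob and the five-minute interval are made up) of hosting a Quartz.Net 2.x scheduler inside a worker role. The clustered AdoJobStore configuration needed to synchronize schedulers across multiple instances is omitted.

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Quartz;
using Quartz.Impl;

// Hypothetical job that would drain the thumbnail queue.
public class ThumbnailJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Dequeue thumbnail requests from a Windows Azure Queue and process them here.
    }
}

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Create and start a scheduler, then register a recurring job.
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = factory.GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<ThumbnailJob>()
                                   .WithIdentity("thumbnailJob")
                                   .Build();
        ITrigger trigger = TriggerBuilder.Create()
                                         .StartNow()
                                         .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever())
                                         .Build();
        scheduler.ScheduleJob(job, trigger);

        // Keep the role instance alive; Quartz fires jobs on its own threads.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }
}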

Some customers also may want to host a company website or a blog together with their e-commerce site. To solve this need, we use WordPress as our blog engine. WordPress is PHP-based and, by default, the PHP runtime libraries are not installed in Windows Azure web or worker roles. Since our WordPress blog runs on Windows Azure web roles, we needed to install the required PHP components as well as WordPress itself. We did this with startup tasks and Web Platform Installer Command Line to setup the PHP runtime on IIS. A Windows Azure SQL Database is used as persistent storage, as well as Windows Azure Blob storage, so we also installed the Windows Azure Storage plugin for WordPress, which uploads files from users directly to blob storage.

Conclusion

For us, the main benefit of migrating our solution to Windows Azure is how easy and fast it is to scale out the application. This lets us focus on our customers’ business needs and support marketing campaigns that drive a large number of requests from our end users.

For our customers, a big benefit is that they no longer need to worry about infrastructure and operational system management.

As pointed out, we had a few technical challenges to solve, none of them insurmountable:

  • Moving from single- to multi-tenancy
  • Moving local storage and SQL storage to blob storage and CDN
  • Scheduling tasks with Quartz.net across multiple role instances
  • Installing PHP runtime and WordPress
  • Refactoring web request handling to be stateless and scalable across multiple instances

We were able to handle all of these challenges and now have a very efficient application running in Windows Azure!


Brian Loesgen (@BrianLoesgen) posted a Video Case Study: MOC1 Solutions Brings Wireless Service Advisor To Windows Azure, Streamlines Auto Service Experience on 6/27/2012:

imageThis is the third in a series of video case studies I am doing with some of the ISVs I work with.

The video is available here. Enjoy!


imageWhen MOC1 Solutions wanted to move their applications supporting automotive dealerships to the cloud, they chose Windows Azure.

In this video, Software Development Manager Alex Hatzopoulos and Architect Greg Cannon speak with Microsoft Principal Architect Evangelist Brian Loesgen. In this wide ranging conversation, they cover their experiences in ramping up their team, setting up their environments, and share other first-hand application migration experience gained while moving their flagship Wireless Service Advisor™ (WSA™) product to Windows Azure.

imageWSA uses wireless and mobile technologies to streamline and standardize the Repair Order (RO) write-up process. WSA enables a service advisor to greet customers at their vehicle when they arrive at the dealership service department. Using a tablet PC, the service advisor scans or hand-writes the Vehicle Identification Number (VIN) or license plate number and transmits the information to multiple databases to retrieve critical customer and vehicle data related to that particular vehicle identifier – the critical data includes repair history, recommended services, warranty and recall information, and customer contact information.

Additionally, the WSA allows the service advisor to complete a full inspection process, handle customer's questions, and provide maintenance recommendations in a timely and interactive fashion, all while standing at the customer's vehicle. The customer can provide service authorization by signing the RO on the tablet PC so they can avoid having to wait for a printed copy. The WSA also allows for the preparation of a printed repair order as well as the update of the DMS database. The WSA presents a user-friendly front-end application that both effectively represents the entire Repair Order write up process and efficiently standardizes the Repair Order write-up procedure. The WSA™ accomplishes all this via an Azure-based backend.

About MOC1 Solutions

Based in Glendora, CA, MOC1 Solutions is a traditional ISV that was founded in 2005 and was incubated in MOC Products until June 2006, when the company was spun out as an independent private entity. MOC1 offers software applications used by automotive dealership service departments and vehicle service facilities.

Tim Huckaby (@TimHuckaby) interviews Steve Fox (@redmondhockey, pictured below) in a Bytes by MSDN video of 6/26/2012:

imageJoin Tim Huckaby, Founder of InterKnowlogy and Actus Interactive Software, and Steve Fox, Director of Global Windows Azure Center of Excellence, as they discuss trends in big data, cloud and devices. Steve unveils his thoughts on the practical side of the cloud as well as some interesting stories about emerging cloud uses. Great Interview!

imageSteve Fox has worked at Microsoft for 12 years across a number of different technologies including natural language, search, social computing, and more recently Office, SharePoint and Windows Azure development. He is a Director in MCS and regularly speaks to many different audiences about building applications on Microsoft technology, with a specific focus on the cloud. He has spoken at several conferences, contributed to technical publications, and co-wrote a number of books including Beginning SharePoint 2010 Development (Wrox), Developing SharePoint Solutions using Windows Azure (MSPress), and the forthcoming Professional SharePoint 2010 Cloud-Based Solutions.


Larry Franks (@larry_franks) described Windows Azure PowerShell and Ruby Cloud Services in a 6/26/2012 post:

imageSome of you may remember a little project I created last year called RubyRole, which let you host Ruby applications as a Windows Azure hosted service (now called a cloud service.) I updated it the other day to fix a few issues with the recent spring update to Windows Azure, and discovered some interesting things with the new Windows Azure PowerShell.

Windows Azure PowerShell and Ruby

imageThe Windows Azure PowerShell is included with the Windows version of the Windows Azure Node.js SDK and PHP SDK, but I've discovered that it's generically useful for deploying and managing cloud services like RubyRole (or really anything that's a cloud service, such as Rob Blackwell's AzureRunme.) The trick is that the project has to have updated ServiceConfiguration and ServiceDefinition files for the spring update, and that there has to be a deploymentSettings.json file in the root of the project.

Emulation

With the Other SDK, I had to use run.cmd to launch the application in the emulator. It worked, but it didn't have a stop command, you had to be in the right directory when you ran it, and so on. Windows Azure PowerShell provides the following cmdlets for working with the emulator:

  • Start-AzureEmulator - Starts the emulator. The optional -launch switch will launch your browser to the URL of the application in the emulator once it's started.

  • Stop-AzureEmulator - Stops the emulator.

Importing Subscription Info

While the emulator commands work out of the box, the rest of the commands I talk about below require some information about your subscription. This is a pretty painless process, and you just have to do it once. Here are the steps:

  1. Download a .publishsettings file that contains your subscription information plus a management certificate that lets you manage your Windows Azure services from the command line. You can do this using the Get-AzurePublishSettingsFile cmdlet.

    This will launch the browser and prompt you to login, then download the .publishsettings file. Save this file somewhere secure, as it contains information that can be used to access and manage your subscription.

  2. Import the .publishsettings file by using the Import-AzurePublishSettingsFile cmdlet as follows:

    Import-AzurePublishSettingsFile <path-to-file>

    This will import the information contained in the file and store it in the .azure directory under your user directory.

    Note: You should delete the .publishsettings file after the import, as anyone can import it and use it to manage your subscription.

That's it. Once you do those steps, you're set for publishing/managing your services from PowerShell.

Note: If you have multiple Windows Azure Subscriptions associated with your login, the .publishsettings file will contain information for all of them and will default to one of them. You can see them all by using Get-AzureSubscription and can set the default by using Set-AzureSubscription. Many commands also allow you to use a -subscription parameter and specify the subscription name to indicate which subscription to perform the action against.

Deployment

Prior to the spring update the only way to deploy the RubyRole project was to use the pack.cmd batch file, which only packaged up the service; you still had to manually upload it. Windows Azure PowerShell provides functionality to pack and deploy the application straight from the command line.

The cmdlet to deploy the project is Publish-AzureServiceProject. This packages up the project, creates a Windows Azure storage account if one isn't already available, and uploads the project to it. Then it creates a cloud service out of the project and starts it.

This command also takes a -launch parameter, which will launch your browser and navigate to the hosted application after the cloud service is up and running.

Remote Desktop

So you've got an application in RubyRole and it works in the emulator, but blows up in the cloud. What to do? Your first step should probably be to use the Enable-AzureServiceProjectRemoteDesktop cmdlet to turn on remote desktop for the project. Once deployed, you can then use the Windows Azure portal to remote into the virtualized environment, look at logs, debug, etc. To turn this off, just use Disable-AzureServiceProjectRemoteDesktop.

Unfortunately this still only works with the Windows remote desktop client.

Management

There are several other management style things you can do with Windows Azure PowerShell, such as:

  • Stop-AzureService - stop a running cloud service

  • Remove-AzureService - removes a cloud service

  • Start-AzureService - starts a stopped cloud service

You can get a full list of the basic developer cmdlets by running help node-dev or help php-dev and a full list of the Windows Azure cmdlets by running help azure.

Summary

As you can see, these are much better than the simple run.cmd and pack.cmd functions available in the Other SDK. They make it much easier for working with projects like RubyRole from the command line. For more information on Windows Azure PowerShell, see How to Use Windows Azure PowerShell.

Note: These cmdlets won't work with the older version of RubyRole because of some changes to the ServiceDefinition and ServiceConfiguration structure, and also because they rely on a deploymentSettings.json file in the root of your web project.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Alessandro Del Sole (@progalex) began a series with Your Data Everywhere (Part 1 of 3): OData Support in Visual Studio LightSwitch on 6/28/2012 for InformIT. From the introduction:

imageIn part 1 of a three-article series, Alessandro Del Sole, author of Microsoft Visual Studio LightSwitch Unleashed, describes a useful addition to LightSwitch: support for Open Data Protocol (OData) data sources. Learn how to work with OData services from your LightSwitch apps.

imageThis article describes a new, important addition to Visual Studio LightSwitch 2012, which is the support for data sources of type Open Data Protocol (OData). I'll explain how to consume OData services in your LightSwitch applications. In part 2 of this series, I'll show you how to expose LightSwitch data sources as OData services to other clients. …

Alessandro continues with a detailed tutorial.


Ken Craigo explained how to Use LightSwitch to search Employee Details in Active Directory in a 6/29/2012 post:

Background
imageI develop applications for the Global Security, Investigations and Legal Departments of a Fortune 100 company in Silicon Valley with offices all over the world. Lately, almost every application I develop involves looking up employee information, and I'm always asked, "Can you include the ability to look up the 1st and 2nd level managers?"

Fortunately Visual Studio LightSwitch with the help of the LightSwitch Team’s "LightSwitch Active Directory Sample" makes this really simple.

Project Description
A simple lookup example that allows a user to look up the 1st and 2nd level managers for a given EmployeeID.

Note: I'm using the EmployeeID field because that is a guaranteed unique identifier; your search term may be different.

A great free tool for looking up Active Directory information is LDAP Browser by Softerra, http://www.ldapbrowser.com/info_softerra-ldap-browser.htm

*** Before we begin I have to point out that I'm using Visual Studio Professional 2012 RC for this tutorial. ***

This tutorial will also work with the previous version of Visual Studio LightSwitch, either as an add-in or the standalone version. I use Visual Studio Professional with the LightSwitch add-in at work.

I left the default project type as Desktop but this will also work as a Web Project.
Steps:

  1. Create a new LightSwitch project, I chose the C# version for this tutorial and I'm naming mine "ADEmployeeSearch"
  2. Add a table to this project called EmployeeData with these 3 fields EmployeeID (String) (Required), FirstLevelManager (String), SecondLevelManager (String).
  3. Download the "LightSwitch Active Directory Sample" again I chose to download the C# version, a VB version is available as well.
  4. Unzip the archive in WinZIP or your favorite Archive handling application.
  5. Add a "List and Details Screen", choose the EmployeeData table for Screen Data, and name the new screen EmployeeDetails.
    Your screen should look similar to this:

    Notice I'm leaving the First Level and Second Level Manager fields in place, if you do not use Active Directory you are still free to enter this information manually. And if you do use Active Directory these values will be overwritten by the returned values from AD.
  6. We now need to switch to File View and add two files from the Active Directory Sample, ActiveDirectoryHelper.cs and ApplicationDataService.cs
    Right-click the Server folder and select Add > Existing Item, then navigate to ActiveDirectoryHelper.cs, which is located in LightSwitch Active Directory Sample\C#\LDAP_CS_Demo\Server. We will need to make a couple of changes to this file shortly.
  7. If the UserCode folder has not been created yet, create a new folder under the Server folder and name it UserCode. Right-click UserCode and select Add > Existing Item, then navigate to ApplicationDataService.cs, which is located at LightSwitch Active Directory Sample\C#\LDAP_CS_Demo\Server\UserCode. We will need to make a couple of changes to this file as well.
  8. Open ActiveDirectoryHelper.cs. This file contains the most common fields in Active Directory, but if your company has custom fields you will need to add them to this file if you intend to use them as search parameters. Add the following fields to the String Constants region. Keep in mind these values may be different in your company's Active Directory; consult your administrator, or use the LDAP Browser if you have the appropriate privileges.
    Add the following:
    public const string EMPLOYEEID = "sAMAccountName";
    public const string HIREDATE = "whenCreated";
    public const string COUNTRY = "co";
    The last two entries are not really necessary for this tutorial, but you may find them useful if you wish to expand on it. You can now close ActiveDirectoryHelper.cs; we are finished with this file for now.
  9. Add two references to your Server project one for System.DirectoryServices and another for System.Configuration (We will use ConfigurationManager to read in values from our Web.config file, as you'll see later on).
  10. Build the project, then open the EmployeeData table in the designer, click the Write Code dropdown, and select EmployeeDatas_Inserting. This will open the ApplicationDataService.cs file, where you will find that the EmployeeDatas_Inserting method stub has been added for you.
    Important TIP: You can't call server code directly from the client; in LightSwitch applications, all interactions between the client and the server happen within the Save, Inserting, and Updating pipelines.
  11. OK, in the default ApplicationDataService.cs the LDAP directory is currently hard-coded. This is not ideal if you wish to sell or give away your application to other clients, because the value will change from client to client. To make it possible for a client's administrator to modify this setting, and to allow our code to read in whatever the current value may be, we need to modify our Web.config file. You can find this file using Windows Explorer at ADEmployeeSearch\ADEmployeeSearch\Server.
    Add the following two entries after the <appSettings> XML tag:
    <add key="LDAPAVAILABLE" value="true"/>
    <add key="LDAPDirectory" value="LDAP://dc=[your domain],dc=com"/>
    ***NOTE: You need to replace [your domain]; consult your Network Administrator.
  12. In ApplicationDataService.cs delete the line
    string domain = @"LDAP://mydomain.foo.com";
    and add the following two declarations:
    // We use ConfigurationManager to read in the values we added to our Web.config file
    string domain = ConfigurationManager.AppSettings["LDAPDirectory"];
    Boolean ActiveDirectoryAvailable = Convert.ToBoolean(ConfigurationManager.AppSettings["LDAPAVAILABLE"]);
  13. Delete the methods DistributionLists_Inserting and CreateMembers
    We won't be using them in this example.
  14. Add the following properties after the ActiveDirectoryAvailable property
    string Name = string.Empty;
    string EmailAddress = string.Empty;
    string Phone = string.Empty;
    string Title = string.Empty;
    DateTime HireDate;
    string Country = string.Empty;
    string Department = string.Empty;
    string FirstLevelManager = string.Empty;
    string FirstLevelManagerEmail = string.Empty;
    string SecondLevelManager = string.Empty;
  15. Add the following method
     private void SearchEmployeeRecord(string EmployeeID)
     {
         string[] props = {
             ActiveDirectoryInfo.strings.DISPLAYNAME,
             ActiveDirectoryInfo.strings.EMPLOYEEID,
             ActiveDirectoryInfo.strings.EMAIL,
             ActiveDirectoryInfo.strings.PHONE,
             ActiveDirectoryInfo.strings.TITLE,
             ActiveDirectoryInfo.strings.REPORTSTO,
             ActiveDirectoryInfo.strings.COUNTRY,
             ActiveDirectoryInfo.strings.DEPARTMENT,
             ActiveDirectoryInfo.strings.HIREDATE};

         var propResults = ActiveDirectoryInfo.UserPropertySearchByName(EmployeeID, domain, props);

         // Parse out the hiredate
         string hdate = propResults[ActiveDirectoryInfo.strings.HIREDATE];

         // Determine if this is an actual Employee
         if (hdate != ActiveDirectoryInfo.strings.VALUENOTFOUND)
         {
             Name = propResults[ActiveDirectoryInfo.strings.DISPLAYNAME];
             EmailAddress = propResults[ActiveDirectoryInfo.strings.EMAIL];
             Phone = propResults[ActiveDirectoryInfo.strings.PHONE];
             Title = propResults[ActiveDirectoryInfo.strings.TITLE];
             Country = propResults[ActiveDirectoryInfo.strings.COUNTRY];
             Department = propResults[ActiveDirectoryInfo.strings.DEPARTMENT];

             string[] split = hdate.Split(new Char[] { '/' });
             int hmonth = Convert.ToInt16(split[0]);
             int hday = Convert.ToInt16(split[1]);
             string hyear = split[2];
             string[] split2 = hyear.Split(new Char[] { ' ' });
             int hyr = Convert.ToInt16(split2[0]);
             HireDate = new DateTime(hyr, hmonth, hday);

             // Parse the manager name out of the ActiveDirectory path
             Tuple<string, string> managerKey = ActiveDirectoryInfo.ParseUserAndDomain(propResults[ActiveDirectoryInfo.strings.REPORTSTO]);
             string firstManagerDisplayName = string.Empty;
             string firstManagerReportsTo = string.Empty;
             string[] managerProps = {
                 ActiveDirectoryInfo.strings.DISPLAYNAME,
                 ActiveDirectoryInfo.strings.EMPLOYEEID,
                 ActiveDirectoryInfo.strings.EMAIL,
                 ActiveDirectoryInfo.strings.PHONE,
                 ActiveDirectoryInfo.strings.TITLE,
                 ActiveDirectoryInfo.strings.REPORTSTO,
                 ActiveDirectoryInfo.strings.COUNTRY,
                 ActiveDirectoryInfo.strings.DEPARTMENT};

             if (managerKey.Item2 != "")
             {
                 var managerName = ActiveDirectoryInfo.UserPropertySearchByName(managerKey.Item1, domain, managerProps);
                 firstManagerDisplayName = managerName[ActiveDirectoryInfo.strings.DISPLAYNAME];
                 var managerpropResults = ActiveDirectoryInfo.UserPropertySearchByName(managerName[ActiveDirectoryInfo.strings.EMPLOYEEID], domain, managerProps);

                 // Parse the 2nd level manager name out of the Active Directory Path
                 Tuple<string, string> secondmanagerKey = ActiveDirectoryInfo.ParseUserAndDomain(managerpropResults[ActiveDirectoryInfo.strings.REPORTSTO]);
                 if (secondmanagerKey.Item2 != "")
                 {
                     var seconManagerName = ActiveDirectoryInfo.UserPropertySearchByName(secondmanagerKey.Item1, domain, managerProps);
                     firstManagerReportsTo = seconManagerName[ActiveDirectoryInfo.strings.DISPLAYNAME];
                 }
             }

             // 1st Level Manager
             FirstLevelManager = firstManagerDisplayName;
             // 2nd Level Manager
             SecondLevelManager = firstManagerReportsTo;
         }
     }
  16. In the method EmployeeDatas_Inserting
    Add the following code:
     if (entity.EmployeeID != "" && entity.EmployeeID != null && ActiveDirectoryAvailable)
     {
         SearchEmployeeRecord(entity.EmployeeID);
         if (this.FirstLevelManager != "")
             entity.FirstLevelManager = this.FirstLevelManager;
         if (this.Name != "")
             entity.EmployeeID = this.Name;
     }
  17. (Optional) Open EmployeeData table, select EmployeeDatas_Updating from the Write Code drop down and enter or copy the following code.
     partial void EmployeeDatas_Updating(EmployeeData entity)
     {
         if (ActiveDirectoryAvailable)
         {
             SearchEmployeeRecord(entity.EmployeeID);
             if (this.FirstLevelManager != "")
                 entity.FirstLevelManager = this.FirstLevelManager;
             if (this.Name != "")
                 entity.EmployeeID = this.Name;
         }
     }
  18. Build the project once again and correct any errors. If all is well, launch the application, enter an EmployeeID, click OK, and click Save. You should now have access to the FirstLevelManager and SecondLevelManager information.

From here you can expand on this tutorial to bring back additional data as required.
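As one concrete expansion (a sketch only; the Email and Title fields are hypothetical and would first need to be added to the EmployeeData table in the designer), you could extend the EmployeeDatas_Inserting method from step 16 along these lines to map more of the values that SearchEmployeeRecord already captures, including the second-level manager, onto the entity:

     partial void EmployeeDatas_Inserting(EmployeeData entity)
     {
         if (entity.EmployeeID != "" && entity.EmployeeID != null && ActiveDirectoryAvailable)
         {
             SearchEmployeeRecord(entity.EmployeeID);
             if (this.FirstLevelManager != "")
                 entity.FirstLevelManager = this.FirstLevelManager;
             if (this.SecondLevelManager != "")
                 entity.SecondLevelManager = this.SecondLevelManager;
             if (this.EmailAddress != "")
                 entity.Email = this.EmailAddress;   // hypothetical field
             if (this.Title != "")
                 entity.Title = this.Title;          // hypothetical field
         }
     }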


Beth Massi (@bethmassi) reported LightSwitch HTML Client Preview Available! on 6/26/2012 (and see Jason Zander’s post below):

imageJust announced on the LightSwitch Team Blog, the HTML Client Preview is now available for MSDN subscribers and will be available to the rest of the world on Thursday. Check out the LightSwitch Developer Center for the download, videos and articles:

Download: Microsoft LightSwitch HTML Client Preview for Visual Studio 2012 (COMING SOON)

Microsoft LightSwitch HTML Client Preview for Visual Studio 2012

Since we announced the HTML client a couple weeks ago, the community has been very anxious to try out the bits (understandably!) so I’m super excited that the release is available today. Please keep up the great conversations and provide us feedback in the LightSwitch HTML Client forum.

image_thumb1The new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client that addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices. With this download you will be able to set up a Virtual Hard Disk (VHD) for evaluation purposes that contains all you need to build, publish, and run touch-centric business applications using LightSwitch. The VHD contains a tutorial to help guide you through the available features. A setup doc is included in the download that contains instructions on how to set up the VHD.

And keep an eye on the LightSwitch Developer Center for more information, videos, articles, etc. as they become available.


Jason Zander (@jlzander) posted Live from TechEd Europe: LightSwitch HTML Client Preview and Visual Studio 2012 Tools for SharePoint on 6/26/2012:

imageThis morning I presented the keynote at TechEd Europe 2012 in Amsterdam, and shared some updates on our tools. If you’re not attending the event in person, you can still tune in online. The keynote video recording is available via live streaming on Channel9, and will be posted on-demand to the TechEd Europe 2012 event page.

The first announcement you’re likely to hear about is our LightSwitch HTML Client Preview release...

LightSwitch HTML Client Preview Availability

image_thumb1At TechEd North America 2012, I showed how LightSwitch is embracing a standards based approach with HTML5, JavaScript and CSS, so you can build companion touch-centric apps that run on multiple devices. This approach allows you to take advantage of the same backend services you’re using across your applications, as well as the productivity gains of LightSwitch.

We’re excited to announce that the LightSwitch HTML Client Preview will be available later today for MSDN subscribers (I’ll update this post once the bits are live), and will be available publicly on Thursday June 28th! To learn more about the release, provide feedback, or ask questions, please visit the LightSwitch Developer Center, team blog, and forums.

Visual Studio 2012 Tools for SharePoint 2010

This morning I also demoed SharePoint tools. With Visual Studio 2012 RC, we’re delivering another compelling release for writing SharePoint 2010 solutions. We’ve developed a rich experience for creating SharePoint lists and content types, so that you no longer need to deal with the complex schema or error-prone hand-editing of XML. Our new SharePoint List Designer allows you to visually and accurately define new lists and content types:

sptools

We’re also working hard to make sure that you get the most accurate IntelliSense when working with SharePoint solutions. When developing a sandboxed solution, we now filter to the APIs that are available in production, so that you get immediate feedback on the right APIs to use. We’ve also augmented IntelliSense to parse JavaScript files that are stored in the SharePoint content database, and now provide IntelliSense for the functions and members in those files.

Visual Studio 2012 RC includes several enhancements for Office 365 development, where SharePoint solutions run in a sandboxed process. For example, the Visual Web Part template has been updated to be compatible with the sandbox and can now be safely deployed to Office 365. We’ve also introduced a new Silverlight Web Part template, in case you prefer to define your Web Parts in XAML. Finally, we’ve improved the experience of deploying sandboxed solutions with a new Publish dialog, which allows you to directly publish to Office 365 or any other remote SharePoint Server.

ALM support for SharePoint development continues to improve in the Visual Studio 2012 RC. We’ve expanded our profiling support so that you can get rich information about the bottlenecks in both farm and sandboxed solutions. I also announced previously that we’ll continue to add ALM support in the first Ultimate Feature Pack, which will feature unit testing support as well as support for SharePoint load testing.

Conclusion

I look forward to hearing from you as you have an opportunity to try out these features.

Enjoy the event, and make sure to check out the videos online as they become available!

As of 6/26/2012 at 10:00 AM PDT, the four parts of the LightSwitch HTML Client Preview for Visual Studio 2012 were available for download by MSDN subscribers here:

image

The Details:

We are pleased to announce that the Microsoft LightSwitch HTML Client Preview for Visual Studio 2012 is now available for download. The preview provides an early look at our upcoming support for building cross-browser, mobile web clients with LightSwitch in Visual Studio 2012. The new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client that addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices.

With this download you will be able to set up a Virtual Hard Disk (VHD) for evaluation purposes that contains all you need to build, publish, and run touch-centric business applications using LightSwitch. The VHD contains a tutorial to help guide you through the available features.

That’s the reason the download is 5.7 GB.


Kostas Christodoulou (@kchristo71) began describing his Simple Extension Methods (part 1) on 6/25/2012:

imageAs soon as I started writing LightSwitch applications I noticed that many times I was repeating the same code over and over for trivial tasks. So after all this time I have collected a number of extension methods that I widely use in my apps.
For me, reusing code is a must, and although the implementation of LS (IMHO) does not provide for this out of the box, the underlying framework is ideal for writing extension classes and methods that are a major step towards code reusability. If you have downloaded any of my samples from msdn or have seen my Application Logo post, you already suspect I am an “extension method fanatic”.

So I will present a small series (I don’t know how small) of posts with extension methods from my Base.LightSwitch.Client library.

imageThe first method is one of the first (if not the first) extension methods I wrote. As soon as you want to override the code for default commands like Edit and Delete for a collection (let’s name it MyCollection), you have to write something like this:

partial void MyCollectionDelete_CanExecute(ref bool result){
  if (this.MyCollection == null || this.MyCollection.SelectedItem == null)
    result = false;
  else
    result = true;
}

This is the minimum code you need to write (it can be written more elegantly, I know, but this is the concept). I don’t take the permissions issue into account here.
A similar chunk of code has to be written for Edit.
Isn’t the code listed below easier to read?
partial void MyCollectionDelete_CanExecute(ref bool result){
  result = this.HasSelection("MyCollection");
}
It’s not only easier to read but is less error prone than the original. Plus you can inject any security logic in this HasSelection method.
And this is the code:
public static bool HasSelection(this IScreenObject screen, string collectionName) {
  if (!screen.Details.Properties.Contains(collectionName))
    return false;
  IVisualCollection collection =
    screen.Details.Properties[collectionName].Value as IVisualCollection;
  return collection != null &&
         collection.SelectedItem != null &&
         (collection.SelectedItem as IEntityObject).Details.EntityState != EntityState.Deleted;
}
For those who dislike using string property names, I can suggest this version:
public static bool HasSelection<T>(this VisualCollection<T> collection) where T : class, IEntityObject {
  return collection != null &&
         collection.SelectedItem != null &&
         collection.SelectedItem.Details.EntityState != EntityState.Deleted;
}
This version is more concrete, is generic, and also does not have to do the (out of thin air) conversion of SelectedItem to IEntityObject. If you use this version, though, you have to change your partial method, as you cannot extend null:
partial void MyCollectionDelete_CanExecute(ref bool result){
  result = MyCollection != null && MyCollection.HasSelection();
}
The choice is yours…
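
If you also want to handle the permissions issue Kostas leaves out, the selection check combines naturally with LightSwitch’s standard permission check in the same partial method. Here is a minimal sketch of that combination; the permission name CanDeleteItems is hypothetical, so substitute a permission actually defined in your application’s Access Control tab:

partial void MyCollectionDelete_CanExecute(ref bool result){
  // Hypothetical permission name; Application.User.HasPermission is the standard
  // LightSwitch check, combined with the HasSelection extension method shown above.
  result = this.Application.User.HasPermission(Permissions.CanDeleteItems)
           && this.HasSelection("MyCollection");
}

Because HasSelection already guards against a missing property, a null collection and a deleted selection, the permission test is the only part that changes from screen to screen.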

Kostas posted Simple Extension Methods (part 2) on 6/29/2012. From the introduction:

imageIn the previous post I presented an extension method used mostly for overriding the edit and delete commands of a collection. One may ask “why do I want to do this?”. Apart from any other requirements/business logic dependent reason one might want to implement, for me there is one simple yet important reason: I don’t like at all (to be kind) the default add/edit modal windows when adding or editing an entry. It’s not a coincidence that the FIRST sample I wrote for LightSwitch and posted in the Samples of msdn.com/lightswitch was a set of extension methods and contracts to easily replace standard modal windows with custom ones.

imageMost of the time when I have an editable grid screen and select Add or Edit, I DON’T want the modal window to pop up; I just want to edit in the grid. Or, in a list and details screen, I want to edit the new or existing entry in the detail part of the screen.

This is the main reason I so often override the default Add/Edit command behavior. And for this reason I created and use the next two extension methods. …
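
To make the idea concrete, here is a minimal sketch of the in-place editing pattern Kostas describes. It is my own illustration, not his actual extension methods (those are in his post); the command name follows the MyCollection naming from part 1 and will vary depending on how your screen was generated:

partial void MyCollectionAddAndEditNew_Execute()
{
  // Add the new entity directly to the visual collection; it becomes the selected
  // item and can be edited inline in the grid or the details section, so the
  // default modal add window never appears.
  this.MyCollection.AddNew();
}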


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Tim Anderson (@timanderson) wrote Microsoft: We tried to use Azure ourselves last year, and couldn't. But now we're fully ready to cannibalise our own server biz for The Register on 6/27/2012:

imageIn the first half of 2011, Microsoft made a series of changes at the top of the team running Windows Azure, its cloud.

“A large group of new people came into the Azure team,” general manager Bill Hilf said at a Microsoft cloud event in London last week. “Satya Nadella came over, Scott [Guthrie] came over, I came over at the same time.”

Nadella is now president of server and tools, while corporate vice president Guthrie, co-inventor of ASP.NET, moved from his job running .NET technology.

imageThe executive shuffle paved the way for an epiphany over the state of Windows Azure and ushered in a period of big changes for Redmond's cloud, Guthrie told The Reg in London during his trip last week for a couple of Windows Azure events.

image“We did an app building exercise about a year ago, my second or third week in the job, where we took all the 65 top leaders in the organisation and we went to a hotel and spent all day building on Azure," said Guthrie.

"We split up everyone into teams, bought a credit card for each team, and we said: ‘You need to sign up for a new account on Azure and build an app today.’"

“It was an eye-opening experience. About a third of the people weren’t able to actually sign up successfully, which was kind of embarrassing. We had billing problems, the SMS channel didn’t always work, the documentation was hard, it was hard to install stuff.

“We used that [experience] to catalyze and said: 'OK, how do we turn this into an awesome experience?' We came up with a plan in about four to five weeks and then executed.”

imageThe changes were fundamental. Azure now offers Amazon-like Infrastructure as a Service (IaaS). Previously, Azure virtual machines (VMs) were always stateless. Applications could write to the local drive or registry, but those changes could revert at any time.

The new Azure supports durable VMs alongside the old model. It also has a new admin portal based mostly on HTML rather than Silverlight; new command line tools; a new hosted website offering which starts from free; new virtual networking that lets you connect Azure to your on-premise network; new SDKs for .NET, Node.js, PHP, Java and Python; and performance features including a distributed cache and solid-state (SSD) storage.

What was required to enable stateful VMs?

“A lot of the work comes down to storage,” said Guthrie. “Making a VM work is relatively easy. Making it work reliably is hard. We’ve spent a lot of time on the storage system, architecting it so you could run VM disks and VM images off our storage system, which gives us much more scale, much more reliability, much more consistent performance.

“There was a lot of work at the networking layer. VMs want to be able to use UDP in addition to TCP. It was a pretty massive effort that consumed a lot of the last year. The result is an environment where you can literally stand up a VM and install anything you want in it.”

Azure supports Linux as well as Windows and multiple platforms including Java, PHP, Python and more. Why the proliferation?

“In a cloud environment, especially for enterprise customers, you don’t typically category shop," says Guthrie. "You’re not going to buy your load balancer from cloud vendor A, and VMs from cloud vendor B. Instead you are going to go to a vendor shop and pick a cloud platform, and run all your infrastructure in it. We can now be the vendor that someone bets on for the cloud.

“Office 365 and Azure run in the same data centres, so traffic between them is fast and secure. We can run any workload that an enterprise has: whether it is big data, whether it is Java app server, whether it is .NET, email, SharePoint – we got it.”

Azure supports resiliency – through availability sets that run on separate hardware in Microsoft’s data centres – and scaling, through a load-balancing service to which you can add VMs. Elasticity is not yet fully automatic, however.

“You could use our dashboard or you could use our command-line console app to spin-up or spin-down instances," says Guthrie. "We also have something called WASABI [Windows Autoscaling Application Block], which is a pre-built set of scripts that does that automatically. We support that with a pre-packaged project. Or you can just write your own... Long term you’ll see us add – directly in the portal – the ability to set up step functions based on load.”

El Reg asked Guthrie how Azure storage uses SSDs.

“It’s for journaling. It’s not so much storing your bits; it’s making sure that read and write operations are really smooth and fast. The biggest benefit is consistency. Writing an app to handle multi-second variance is hard. We try to have our standard deviation be low.”

Amazon price beater

Is Microsoft aiming to be price-competitive with Amazon? Guthrie prevaricated a little. “Our retail hourly prices I think are the same as Amazon’s. We are looking to be cost effective. More than price though, it’s really value of service. I don’t typically run into people saying cost is the biggest barrier to cloud, and those people include both Amazon and Azure customers. It’s more that it all fits together, there’s one REST API to manage it, you can use System Center, you can use a web portal, you can use any language. We’d like to be the Mercedes of the cloud business, as opposed to the cheapest.”

What are the implications for Microsoft’s partners as the company takes on more of its own cloud hosting?

“There is plenty of opportunity in the market for both of us,” Guthrie insists. “We love the cloud, we love the server business. We make most of our money in the server business. The approach that we’re taking with Azure is that we want the two to work together.”

It's a good line, but it is hard to see how Microsoft can avoid cannibalising its own business. Then again, from Microsoft’s point of view, better to cannibalise that business than see it go to Amazon.


Steve Plank (@plankytronixx) produced a 00:06:59 Video: Complete Overview of the 2012 Azure Release on 6/29/2012:

Includes IaaS Virtual Machines, Virtual Network, Web Sites, Cloud Services, Identity, Service Bus, Cache and so on.

That’s a lot of ground to cover in seven minutes.


The Windows Azure Service Dashboard reported Limitations on Compute and Storage Accounts for New Users of the North Central or South Central US Data Centers in late May 2012:

imageNorth Central US and South Central US regions are no longer accepting Compute or Storage deployments for new customers. Existing customers as of June 24th (for North Central US) and May 23rd (for South Central US) are not impacted. All other services remain available for deployment, and new regions "West US" and "East US" are now available to all customers with the full range of Windows Azure Services.

image

 

See Mark Brown’s clarification of the new service restrictions below.


Steven Martin (@stevemar_msft) of the Windows Azure Team discussed Datacenter Expansion and Capacity Planning in a 5/24/2012 post (updated 5/29/2012) for the new US West and East data centers:

Editor’s Note: This post was updated on May 29, 2012 to reflect availability of SQL Azure in the “West US” Region.

imagePeople’s ears usually perk-up when they hear Windows Azure uses more server compute capacity than was used on the planet in 1999. We are excited and humbled by the number of new customers signing up for Windows Azure each week and the growth from existing customers who continue to expand their usage. Given the needs of both new and existing customers, we continue to add capacity to existing datacenters and expand our global footprint to new locations across the globe.

imageTo anticipate the capacity needs of existing customers, we closely monitor our datacenters capacity trends. To ensure customers can grow their usage in datacenters in which they are already deployed, datacenters that hit certain thresholds are removed as options for new customers. Today, we are removing compute and storage services as options for new customers in the South Central US region. Existing customers already deployed into South Central are not impacted. SQL Azure, Service Bus, Caching, and Access Control remain available in South Central to new customers.

As we announced in a recent blog post, two new US datacenter options ("West US" and "East US") are available to Windows Azure customers. Today we are announcing the availability of SQL Azure in the "East US" and "West US" Regions to complement existing compute and storage services.

We appreciate the incredible interest our customers are showing in Windows Azure, and will communicate future news around our growing footprint of global datacenters as new options come online. As always, the best way to try Windows Azure is with the free 90-day trial.


Mark Brown (@markjbrown) of the Windows Azure Team described in a 6/28/2012 message an Update to Storage/Transactions benefits for Azure Insiders (Azure Pass) and MSDN subscribers, as well as clarified restrictions on creating new Cloud Services in Microsoft’s North Central or South Central US data centers:

imageWanted to let you know that we’ve updated our storage size and storage transactions for these customers below (including Azure Pass).

So all customers will now get the following, depending on their offer:

Offer                                    Storage    Storage Transactions
90-day Free Trial / Azure Pass           35GB       50M
MSDN Professional / Cloud Essentials     35GB       50M
MSDN Premium                             40GB       75M
MSDN Ultimate / BizSpark                 45GB       100M

Customers with a presence in the North Central or South Central US datacenters can add services to a subscription that already uses North/South Central, or create new subscriptions and deploy to North Central.

imageI would have appreciated a bump in the number of free hours for Cloud Services, also.


Karthikeyan Anbarasan continued his series with a Getting Started with Windows Azure Media Services – Connecting to Azure Media Services Programmatically – #Meet Azure Edition post of 6/26/2012 to F5’s Debug blog:

imageIn this tutorial we are going to see how to connect programmatically to Windows Azure Media Services using the Windows Azure Media Services SDK for .NET application development. In our earlier articles we have seen What is Windows Azure Media Services () and What are the steps to configure the Windows Azure Media Services Account (). Now in this tutorial we will write some code using the Visual Studio 2010 IDE and see how to connect to Windows Azure Media Services and create a cloud context, which is the key object that holds all the information about the entities used with Azure Media Services from the application development perspective.

imageOpen the Visual Studio 2010 IDE and create a new Windows Application project or a WPF project with a valid project name, which will be used throughout this series to explore the Windows Azure Media Services core features one by one, as shown in the screen below.

image

Now let us design the page with the controls that are basically required to connect to Windows Azure Media Services using the CloudMediaContext class. The server context provides complete access to all the entities needed to work with media objects such as assets, files, jobs, tasks, etc. Once the screen is designed, it looks like the one below.

image

The next step is to add the Media Services reference; the DLL (Microsoft.WindowsAzure.MediaServices.Client.dll) is available in the SDK installation folder in the development environment, i.e. C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0, as shown in the screen below.

image

The next step is to add an App.config file where we provide the Account Name and Account Key as configuration settings that can be changed later as needed, as shown in the screen below.

image
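
The screenshot above shows that configuration file. In case it is hard to read, the appSettings section simply carries the two keys that the code below reads (msAccountName and msAccountKey), along these lines, with placeholder values to be replaced by your own preview account name and key:

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <!-- Placeholder values: substitute the account name and key from your Media Services preview account -->
    <add key="msAccountName" value="YourMediaServicesAccountName" />
    <add key="msAccountKey" value="YourMediaServicesAccountKey" />
  </appSettings>
</configuration>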

Now, in the code-behind, declare private variables that read the account name and account key so they can be used when creating an instance of CloudMediaContext, as shown in the code below. The CloudMediaContext class exposes the entities the application uses to manipulate media objects as needed.

Code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Configuration;
using Microsoft.WindowsAzure.MediaServices.Client;

namespace MeetAzureMediaServices
{
    public partial class MainWindow : Window
    {
        private static CloudMediaContext cmContext = null;
        private static readonly string strMsAccountName = ConfigurationManager.AppSettings["msAccountName"];
        private static readonly string strMsAccountKey = ConfigurationManager.AppSettings["msAccountKey"];

        public MainWindow()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            cmContext = GetContext();
        }

        static CloudMediaContext GetContext()
        {
            return new CloudMediaContext(strMsAccountName, strMsAccountKey);
        }
    }
}

image

Now that we are done with the code, we can build and run the project and see the application execute without any errors. We will not see any particular output, since we are not displaying any of the details, but we have created a CloudMediaContext that exposes all the entities that can be used as required; the list of available entities can be seen in debugging mode, as shown in the screen below.

image

So in this tutorial we have seen how to connect programmatically to Windows Azure Media Services and create a context that holds the entities used to manipulate media objects as required.
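
As a quick editorial follow-up: rather than relying only on the debugger screenshot above, a small helper such as the following could summarize a few of those entity sets. This is my own sketch against the June 2012 SDK v1.0 object model, in which CloudMediaContext exposes queryable sets such as Assets, Jobs and AccessPolicies; it assumes the same usings as the code above:

// Sketch: summarize a few of the entity sets exposed by CloudMediaContext.
static string DescribeContext(CloudMediaContext context)
{
    return string.Format("Assets: {0}, Jobs: {1}, Access policies: {2}",
        context.Assets.Count(),
        context.Jobs.Count(),
        context.AccessPolicies.Count());
}

The returned string could, for example, be shown with MessageBox.Show in button1_Click after GetContext() succeeds.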


Karthikeyan Anbarasan posted Getting Started with Windows Azure Media Services – Setting up the Media Services Preview Account – #Meet Azure Edition to F5’s Debug blog on 6/25/2012:

imageIn our earlier article we saw what Windows Azure Media Services is and the different terminologies and operations involved with it. Here in this tutorial we are going to see how to set up the Windows Azure Media Services Preview account so we can start developing applications with the Visual Studio 2010 IDE and deploy them to the Azure environment. To get a clear idea of Windows Azure Media Services, I would first suggest starting with this article, which clearly describes the basics of Media Services step by step: “Getting Started with Windows Azure Media Services – #Meet Azure Edition”. [Link to earlier article added.]

imageWindows Azure Media Services is currently in Preview, and in order to use the environment we need to perform some steps that provide us with the required keys, basically the Account Key, which is required to start using Media Services from code.

Step 1 – Sign in to the Windows Azure Portal at http://Windows.Azure.com with a valid subscription and register for the Media Services preview, which is available under the Preview Features section of the Account tab, as shown in the screen below. Since my subscription is already registered, it shows You are active; otherwise you will see the default Try it now message.

image

Step 2 – Clicking Try it Now posts the request to the Windows Azure team and the request is queued; once we get the approval mail and the status shows You are Active, we can start using Media Services from code. The next step is to check that all the prerequisites are installed correctly (see the prerequisites specified in the article “Getting Started with Windows Azure Media Services – #Meet Azure Edition”). Install any missing software to avoid interruptions while setting up the Windows Azure Media Services account.

Step 3 – Create a Windows Azure Storage account (basically used to store the media content) in one of the regions where Windows Azure Media Services is available: West Europe, Southeast Asia, East Asia, North Europe, West US, or East US. For the steps on how to create the storage account, refer to the article “Windows Azure – Creating New Storage Account”. Once the storage is created, it is listed as shown in the screen below.

image

Step 4 – Now install the Windows Azure Media Services SDK, which can be downloaded from the link “Windows Azure Media Services SDK 1.0”. If the SDK was already installed as part of the prerequisites, skip this step and proceed to the next one.

Step 5 – Open Windows PowerShell v2.0 or greater (a Windows 8 machine has PowerShell v3.0 built in). Open the PowerShell ISE in administrator mode by right-clicking and selecting “Run as Administrator”, which opens PowerShell in administrator mode as shown in the screen below.

image

Step 6 – The next step is to change the directory to the path where we installed the Windows Azure Media Services SDK; use the script below to change to that path. Note: keep the path in quotation marks so that you do not get an error.

Script – cd “C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0”

image

Step 6 – This next step is very important: we activate the Windows Azure Media Services account using the script below. The script first creates a Management Certificate internally and uploads it to the media server when executed. If the script runs correctly, a new browser window opens with the steps on how to proceed after installation, as shown in the screens below.

Script: PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> .\GetMediaServicesEnv.ps1

image

After successful registration of the media services account:

image

Once things are done correctly, we are prompted to download a file (PublishSettings) to the local machine; this file has the information that is basically required when retrieving the account key. Save the file to the local machine; it contains the management certificate details as shown in the screen below.

image

Step 7 – The next step is to get the endpoint information the service points to, so use the script below (which takes the path to the downloaded Management Certificate) and execute it in Windows PowerShell as shown in the screen below. On successful execution we get the management service endpoint, certificate thumbprint, and subscription ID of the Windows Azure account.

Script:

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> .\SetMediaServicesEnv.ps1 -path “D:\Path\download.publishsettings”

image

image

Step 8 – Run the script below, providing the three parameter values obtained from the last script, as shown in the screen below.

Script :

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> $context = Get-MediaServicesManagementContext -managementserviceendpoint “https://management.core.windows.net/” -managementcertthumbprint “XXXXXXXXXXXXXXXXXXXXXXXX” -subscriptionid “XXXXXXXXXXXXXXXXXXXX”

image

image

Step 9 – The next step is to check in which region we are going to create the account; Windows Azure Media Services is currently available in only a few regions, and Microsoft keeps working to increase the availability zones one by one. To get the list, execute the script below; the result is listed as shown in the screen below. Select one region from the list and note it for later.

Script:

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> Get-MediaServicesAvailableRegions -managementcontext $context

image

Step 10 – This step is important: before we proceed with giving our application a Media Services account name, we first need to check whether the name is available. Since names are global, someone from a different region may already have used the name, so to check whether the Account Name is available, run the script below with your favorite name in the string, as shown in the script and screen below. We get a result of True or False based on availability.

Script:

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> Get-MediaServicesAccountAvailability -managementcontext $context -AccountName “F5debugMediaServices”

image

image

Step 11 – We can see the Account Name is available, and now we have all the required information. Execute the script below, which creates the Windows Azure Media Services account, as shown in the screen below.

Script :

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> Add-MediaServicesAccount -ManagementContext $context -AccountName “F5debugMediaServices” -StorageAccountName “f5debugstorage” -StorageAccountKey “Primary or Secondary Key” -Region “US_East” –BlobStorageEndpoint http://storage.blob.core.windows.net/

image

image

Step 12 – If the information is provided correctly and the scripts execute without errors, the account gets created and we can see the Account ID and subscription details as shown in the screen below.

[Image missing]

Step 13 – Now we need to retrieve the Account Key, which is what we will use to connect to Media Services from the code-behind. To get the Media Services Account Key, run the script below as shown in the screen below. (Both scripts need to be executed.)

Script :

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> $accountdetails = Get-MediaServicesAccountDetails $context –AccountName “F5debugMediaServices”
PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> $accountdetails.accountkey

image

Step 14 – Once we have the Account Key, execute the script below, as shown in the screen below, to get the complete details of the account, including the account key.

Script :

PS C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0> Get-MediaServicesAccounts $context

image

Now we are done with all the steps necessary to register the account, and we have the Account Name and Account Key that are used to connect to Windows Azure Media Services from code and perform manipulations programmatically as required. We will use these details to connect to the media server from code in the next tutorial. Until then, Happy Programming!!!


Rick Saling (@RickAtMicrosoft) described Windows Azure Performance: Best Practices in a 6/26/2012 post:

imageThis topic, Best Practices for Performance in Windows Azure Applications, appeared with the recent Windows Azure Spring 2012 refresh. It's focussed on design issues that affect performance. A related topic that was published at the same time, Troubleshooting in Windows Azure, discusses the run-time side of things.

imageA lot of my topic deals with Windows Azure SQL Database (the database previously known as "SQL Azure"), and especially Federations. In that light, I want to call out a great video of a presentation on Federations by Cihan Biyikoglu at the 2012 US Tech Ed conference in Orlando. Cihan's blog is also a great source of information about Federations.

Federations illustrates an interesting paradox concerning Windows Azure performance. Individual operations in Windows Azure are often slower than the corresponding operations on an on-premises server, for a number of reasons, including latency and automated fail-over (nodes, including persistent data, are generally replicated a number of times, which takes time; and if a node goes down and fails over to a secondary node, that also takes time). But this can be more than compensated for by the scaling out of resources, and Federations is an outstanding example of this. The result can be a massive parallelization of work, which can greatly increase performance.

imageWhile database scale-out is the commonest example, the same techniques can be applied to other Windows Azure resources.

Another important aspect of Windows Azure performance is the need to test your architectural design by early-on creating a proof of concept application. Windows Azure contains a lot of "moving parts", and it is in a state of rapid evolution and development. Your exact combination of parts has not necessarily been tested by Microsoft (a Computer Science class in Combinatorics will clarify why this is (hint: look at the factorial function...)), and you should validate your architecture early in the development cycle, rather than trying to fix perf problems when about to deploy your application, or even worse, after it's in production (this has of course never happened to me :-)).

Finally, Eric Lippert has a video where he talks briefly about performance as an engineering discipline. Actually most of the video is about C#'s new features and also about Roslyn... But his remarks about performance are worth hearing.

I want to give a shout-out to another topic in the Best Practices series, that was published along with the Performance and Trouble-Shooting ones: Windows Azure Security Guidance.

Finally, as always I welcome feedback! Windows Azure performance is a vast area, and there are undoubtedly areas that could use more detail. And it is a continuously evolving platform, so new issues are likely to appear.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Brad Anderson posted Beyond Server Virtualization: Common Technologies for Your Move to Cloud Computing on 6/22/2012:

image_thumb2Gartner just released their 2012 Magic Quadrant for x86 Server Virtualization Infrastructure*, and I am very happy to report that Microsoft is listed as a leader. You can download the full report at the link above, and I hope you read the research in detail. The report reviews Hyper-V in Windows Server 2008 R2 – keep in mind, the virtualization and private cloud capabilities in Windows Server 2012 are even better! You don’t have to wait for the general availability of Windows Server 2012 later this year. You can download the release candidate right now.

imageYou can also read two other newly-released reports: Hyper-V and SQL Server 2012 enterprise workload performance report from Enterprise Strategy Group, and an IDC white paper sponsored by Microsoft, Delivering Private Clouds today with System Center 2012 (with a supporting webcast). These add to the growing quantity of recognition for the enterprise readiness of Microsoft’s virtualization and private cloud solutions.

Last year marked a significant milestone. There are now more virtualized operating systems installed globally than there are non-virtual instances.

But this is just the beginning. We know that IT leaders want to move beyond virtualization and need the flexibility to use capacity from multiple clouds. So at our MMS conference in April, I announced the availability of Microsoft System Center 2012, a solution that lets you manage your applications wherever they are, across physical, virtual, private cloud and public cloud environments. Together, System Center 2012 and Windows Server are optimized to help businesses explore cloud computing easily and more affordably, making cloud computing approachable for almost all organizations. At MMS, one of my favorite demos showed how System Center 2012 can set up a basic private cloud infrastructure using existing servers in less than a minute. That’s fast.

I encourage you to read this article by our Director of Product Marketing Edwin Yuen, which will go into even more detail on reasons to choose Hyper-V for your virtualization and private cloud infrastructure.

*Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


<Return to section navigation list>

Cloud Security and Governance

image_thumbNo significant articles today.


<Return to section navigation list>

Cloud Computing Events

The Google Cloud Platform Team posted Google Compute Engine Pricing on 6/28/2012:

imagePricing for virtual machine resources

Google Compute Engine currently offers the following 4 machine types. We will be offering additional configurations in the future including smaller types to help developers get started easily, as well as larger types to support more powerful scaling of applications.

Configuration      Virtual Cores   Memory      GCEU*   Local Disk   Price/Hr   $/GCEU/Hr
n1-standard-1-d    1               3.75GB **   2.75    420GB **     $0.145     0.053
n1-standard-2-d    2               7.5GB       5.5     870GB        $0.29      0.053
n1-standard-4-d    4               15GB        11      1770GB       $0.58      0.053
n1-standard-8-d    8               30GB        22      2 x 1770GB   $1.16      0.053

* GCEU is Google Compute Engine Unit -- a measure of computational power of our instances based on industry benchmarks; review the GCEU definition for more information
** 1GB is defined as 2^30 bytes …

According to the Google Compute Engine document for developers’ Instances page:

GCEU (Google Compute Engine Unit), or GQ for short, is a unit of CPU capacity that we use to describe the compute power of our instance types. We chose 2.75 GQ’s to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge platform.

The Pricing page continues with Network Pricing, Persistent Disk Pricing and IP Address pricing.
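
For readers used to comparing monthly bills, here is a quick back-of-the-envelope conversion of the hourly rates above. This is my own arithmetic, not a published Google figure; it assumes roughly 730 hours of continuous use per month and ignores network, persistent disk and IP address charges:

using System;

class GceMonthlyEstimate
{
    static void Main()
    {
        // Hourly rates for n1-standard-1-d through n1-standard-8-d from the table above.
        var hourlyRates = new[] { 0.145, 0.29, 0.58, 1.16 };
        foreach (var rate in hourlyRates)
            // ~730 hours in an average month; prints, e.g., "$0.145/hr -> about $106/month"
            Console.WriteLine("${0}/hr -> about ${1:F0}/month", rate, rate * 730);
    }
}

That puts the smallest instance at roughly $106 per month and the largest at roughly $847 per month, before storage and bandwidth.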


Thorsten von Eicken reported RightScale Joins Google Compute Engine for Launch Day in a 6/28/2012 post, which begins:

imageToday we’ll be demoing RightScale managing a deployment on Google Compute Engine during the launch presentation at Google I/O at 1:30pm (PT). With the release of Google Compute Engine, the year 2012 is becoming a turning point in the evolution of cloud computing. There are now multiple public megaclouds on the market, and public cloud computing is set to become the dominant form of business computing (mobile arguably becoming the dominant form of consumer computing). I’ll come back to why I am convinced of this at the end, but first let’s focus on Google Compute Engine.

Google Compute Engine in the RightScale DashboardWe’ve been working for months with the team at Google building out Google Compute Engine to ensure that everything is ready for our customers to leverage it. We realized very quickly that Google Compute Engine is an all-out effort to build a world-class cloud on one of the most awesome global computing infrastructures.

It also became clear that Google Compute Engine is comparable to the most successful infrastructure clouds in the market but not a clone in any way. The team at Google has leveraged the depths of Google’s engineering treasure trove to bring us their take on how a cloud platform ought to look. Yes, this means that Google Compute Engine is not API compatible with any other cloud. Yes, it also means that resources in Google Compute Engine behave slightly differently from other clouds. However, to RightScale users this will not be an obstacle as our platform takes care of the API differences and our ServerTemplates accept and even leverage the more important resource differences. We actually welcome these differences. …

And continues with a 00:04:02 video featuring Thorsten and a signup offer for a private beta:

Overall, Google Compute Engine has been a pleasure to work with, which is perhaps best summed up by RightScale customer Joe Emison who says, “[we] have found the performance of the Google Compute Engine VMs to be the most consistent of any other virtualized architecture we’ve used.” Joe is VP of Research and Development at BuildFax and a long-time RightScale customer who helped us test drive Google Compute Engine. We now look forward to onboarding many more customers, and invite you to sign up for the Google Compute Engine with RightScale private beta. …

The post contains some interesting commentary about Google’s new IaaS offering. I’ve signed up for the beta.


William Vambenepe (@vambenepe) posted Google Compute Engine, the compete engine on 6/28/2012:

imageGoogle is going to give Amazon AWS a run for its money. It’s the right move for Google and great news for everyone.

But that wasn’t plan A. Google was way ahead of everybody with a PaaS solution, Google App Engine, which was the embodiment of “forward compatibility” (rather than “backward compatibility”). I’m pretty sure that the plan, when they launched GAE in 2008, didn’t include “and in 2012 we’ll start offering raw VMs”. But GAE (and PaaS in general), while it made some inroads, failed to generate the level of adoption that many of us expected. Google smartly understood that they had to adjust.

image“2012 will be the year of PaaS” returns 2,510 search results on Google, while “2012 will be the year of IaaS” returns only 2 results, both of which relate to a quote by Randy Bias which actually expresses quite a different feeling when read in full: “2012 will be the year of IaaS cloud failures”. We all got it wrong about the inexorable rise of PaaS in 2012.

But saying that, in 2012, IaaS still dominates PaaS, while not wrong, is an oversimplification.

At a more fine-grained level, Google Compute Engine is just another proof that the distinction between IaaS and PaaS was always artificial. The idea that you deploy your applications either at the IaaS or at the PaaS level was a fallacy. There is a continuum of application services, including VMs, various forms of storage, various levels of routing, various flavors of code hosting, various API-centric utility functions, etc. You can call one end of the spectrum “IaaS” and the other end “PaaS”, but most Cloud applications live in the continuum, not at either end. Amazon started from the left and moved to the right, Google is doing the opposite. Amazon’s initial approach was more successful at generating adoption. But it’s still early in the game.

As a side note, this is going to be a challenge for the Cloud Foundry ecosystem. To play in that league, Cloud Foundry has to either find a way to cover the full IaaS-to-PaaS continuum or it needs to efficiently integrate with more IaaS-centric Cloud frameworks. That will be a technical challenge, and also a political one. Or Cloud Foundry needs to define a separate space for itself. For example in Clouds which are centered around a strong SaaS offering and mainly work at higher levels of abstraction.

A few more thoughts:

  • If people still had lingering doubts about whether Google is serious about being a Cloud provider, the addition of Google Compute Engine (and, earlier, Google Cloud Storage) should put those to rest.
  • Here comes yet-another-IaaS API. And potentially a major one.
  • It’s quite a testament to what Linux has achieved that Google Compute Engine is Linux-only and nobody even bats an eye.
  • In the end, this may well turn into a battle of marketplaces more than a battle of Cloud environment. Just like in mobile.

I batted my eye when I learned that Google Compute Engine was Linux-only. That cuts the number of potential enterprise users by a substantial percentage.


Google’s Peter S Magnusson (@PeterSMagnusson) reported Google Compute Engine launches, expanding Google’s cloud offerings in a 6/28/2012 post to the Google App Engine blog:

imageToday at Google I/O we were pleased to announce a new service, Google Compute Engine, to provide general purpose virtual machines (VMs) as part of our expanding set of cloud services. Google App Engine has been at the heart of Google’s cloud offerings since our launch in 2008, and we’re excited to begin providing developers more flexible, generalized VMs to complement our fully-managed, autoscaling environment.

imageApp Engine has been growing rapidly since leaving preview, and we’re excited about the benefits that Google Compute Engine brings to developers who want to combine the advantages of App Engine’s easy-to-use, scalable, managed platform with the flexibility of VMs.

If you are interested in using VMs with your App Engine applications in the future, let us know by signing up here.

Signed up, but I’m not sanguine about my chances of being onboarded quickly. I also bought a 16GB Nexus 7 out of curiosity and to compare with the forthcoming Windows Surface tablet when it becomes available for purchase.

Stay tuned to the OakLeaf blog for more posts about the Google Cloud Platform and Google Compute Engine today and over the weekend.


The Google App Engine Team (@app_engine) announced Google App Engine 1.7.0 Released at Google I/O in a 6/27/2012 post to the Google App Engine blog:

imageEach release is special in its own way, but this time we can’t help but be extra proud. From San Francisco to Sydney we’ve taken an extra week to pack in some of our most widely requested features and prepare a host of talks and announcements for Google I/O.

We’ll be bringing you more information about this release and the future of Google App Engine platform, as well as some exciting announcements from our I/O YouTube live stream. We’ll also be posting highlights from I/O on our blog and Google+, so tune in here for updates the rest of this week.

Without further ado, here are the highlights from our 1.7.0 release:

App Engine SSL for Custom Domains
Starting today, developers can serve their applications via HTTPS on custom domains. We’re offering both SNI and VIP based SSL, which provide both a low cost and universally supported option, respectively.
Server Name Indication (SNI)

  • This allows multiple domains to share the same IP address while still allowing a separate certificate for each domain. SNI is supported by the majority of modern web browsers. SNI is priced at $9/month which includes the serving of 5 certificates.

Virtual IP (VIP):
  • A dedicated IP address is assigned to you for use with your applications. VIP is supported by all SSL/TLS compatible web clients and each VIP can serve a single hostname, wildcard or multi domain certificate. A VIP will cost $99/month.


Google App Engine’s additional location - the EU
For the past four years, App Engine applications have been served from North America. However, we understand that every ms of latency counts so we’ve turned up an App Engine cluster in the European Union so that our developers with customers primarily in Europe can have confidence that their site will look as fast as they’ve designed it.

Initially, the Google App Engine cluster in the European Union will be limited to Premier Accounts only. If you are interested in signing up for a Premier Account to get access to our European cluster, as well as Premium Support and invoice billing, please contact our sales team at appengine_premier_requests@google.com.


PageSpeed - Making the Google App Engine Powered Web Faster
At Google we’ve had an ongoing commitment to making the web faster and for almost a year the PageSpeed Service team has been enabling websites to optimize their static content for delivery to end users at lightning fast speed. Today we’re making this service available to our HRD applications with just a few clicks. Use of the PageSpeed Service is priced at $0.39 per GB of outgoing bandwidth, in addition to standard App Engine outgoing bandwidth price.

GeoPoint Support in Search
Our Search team deserved a break after releasing the Search API a month and a half ago, but instead they’ve worked hard to announce exciting improvements at Google I/O. You can now store latitude and longitude as a GeoPoint in a GeoField, and search documents by distance from that GeoPoint.
Other Service Updates

Here are some other amazing updates we have this release:

  • Blob Migration Tool now Generally Available - Since the deprecation announcement for Master/Slave Datastore (M/S), we’ve been continually improving the experience for apps migrating from M/S to HRD. We’re happy to announce that the Blob Migration tool is now generally available, so you can migrate both your Blobstore and Datastore data in one easy step.
  • Application Code Limits Raised from 150MB/version to 1 GB/application - For those of you biting your fingernails every time you update your application, wondering if today will be the day you finally reach the 150MB application version limit, fret not! We’ve updated the application size limit to be 1GB total for all versions of your application. You can check your app’s Admin Console to see the total size of all your application versions. In the future, you’ll be able to purchase more quota to store additional files.
  • Logs API Updates - Paid applications will now be able to specify a logs retention time frame of up to 1 year for their application logs, provided that the logs storage size specified is sufficient for that period. Additionally, we’re introducing some Logs API billing changes so that you can pay to read application logs after the first 100MB. Reading from the Logs API will cost $0.12/gigabyte for additional data over the first 100MB.
  • Go SDK for Windows - We’ve published an experimental SDK for Windows for the Go runtime.
Don’t think these are all the new features we’ve introduced with 1.7.0; we’ve got so much more than just the highlights above. Make your way to our release notes for Java, Python, and Go straightaway to read about 1.7.0. If you have any feedback, we’d love to hear it in our Google Group. We and the whole Stack Overflow community for App Engine have been working hard to answer all your technical questions on the App Engine platform.

Not very exciting news, IMO.


My (@rogerjenn) Cloud Platform Sessions at Google’s I/O Conference in San Francisco 6/27 through 6/29/2012 post includes a list of App Engine sessions:

imageGoogle is holding their annual I/O Conference at San Francisco’s Moscone Hall on 6/27 through 6/29/2012. Here’s the schedule of sessions related to App Engine in the Cloud Platform track:

image

Rumor has it that Google will announce its entry into the Infrastructure as a Service (IaaS) cloud provider sweepstakes during Wednesday’s keynote, which will be streamed live starting at 9:30 AM on 6/27/2012.


Check out Barb Darrow’s (@gigabarb) Google App Engine: What developers want at Google I/O post of 6/26/2012 to GigaOm’s Structure blog, which begins:

image_thumbMost of the noise coming out of Google I/O this week will be around the company’s long-percolating infrastructure as a service plan. But many developers who have banked on Google App Engine, the company’s platform as a service, will be looking for other things.

image_thumb[2]For many, Google App Engine has seemed a sideline for the big search company, a perception some Google execs have labored to correct. Google claimed 150,000 active GAE developers going into the show, a number it will doubtless update.

Barb continues with a nine-point developer wish list.


The SD Times (@sdtimes) Editorial Board posted From the Editors: And the developers shall lead them regarding lack of developer-oriented content at TechEd North America 2012 on 6/29/2012:

imageMicrosoft has always understood that to win a platform war, you must engage the developer community. More than engage: Energize. Empower. Aggressive support for developers, through great tools, outstanding technical guidance and marketing assistance, propelled Windows past OS/2 so many years ago. It’s how Microsoft has remained the desktop leader for so many years.

But the world has changed, and in important new spaces (smartphones, tablets and the cloud), Microsoft clearly is lagging. Not only are consumers voting for non-Microsoft products, but developers are as well.

Take phones. Apple and Google, with iOS and Android, have raced out of the starting gate and left Microsoft plodding, well up the track and way off the pace of smartphones and tablets. Microsoft will need a game-changer here to make headway, as the number of apps available on the other platforms far exceeds apps available for Windows Phones, which has finally reached 100,000. But it’s more than quantity. Many popular apps simply aren’t on Windows Phone 7.5.

Developers clearly are elsewhere. Meanwhile, Amazon and Heroku, among others, are off and running into the cloud, with applications written for those platforms already deployed and working. [*]

imageMicrosoft, of course, has responded. At the recent TechEd conference, high-level executives gave presentations for Windows Azure and Windows Server 12, claiming the two make up what the company is calling “the cloud OS.” Windows 8, the new operating system that brings Metro app styling to the desktop and tablet, was demoed on a Samsung device and looked as if it will be competitive in the tablet arena.

(Since then, of course, Microsoft unveiled its own tablet, called Surface, marking the company’s move into hardware as well as software. Surprisingly, it didn’t make the announcement four days earlier, when more than 10,000 dedicated Microsoft administrators and developers were gathered in Orlando, waiting for some kind of direction.)

What struck us most about TechEd was the lack of meat for developers, and we’re not just talking about the assembly-line lunches.

In an interview with Visual Studio honcho Jason Zander after the second-day keynote, we were told that among the biggest takeaways for developers was the news that LightSwitch, a RAD tool for Web development, now supports HTML5.

That’s it?

We even got a chuckle from the fellow who walked up to the SD Times booth at TechEd, noticed the June issue of the magazine, and shouted delightedly, “At last! I’ve found something here for developers.”

Microsoft should not lose sight of “who brung ‘em” to the big dance. If developers aren’t writing applications for the company’s phones, tablets, laptops and desktops, there’s nothing for end users to use, and whenever possible they will pick up other devices.

The software giant still has an opportunity to compete. The Surface device has generated a good deal of early buzz and even drove the stock price up for a day.

But will that displace the iPad?

The Windows Phone situation is murkier. To win against iOS and Android (and, we suppose, BlackBerry 10), Microsoft will need the ISV Army working overtime to create the compelling apps that will power Metro tiles and take advantage of all that back-end cloud connectivity displayed at TechEd.

Finally: Two big unknowns are Windows 8 and Internet Explorer 10. Will developers embrace the new version of Windows? And will they customize their websites for Microsoft’s latest browser? Much depends on whether Microsoft can energize them. Based on what we saw in TechEd, we are not optimistic. …

Read more on another topic, “Activist Judges.”


 

Steve Plank (@plankytronixx) posted his Video: Cloud Services Talk from UK’s Cloud Day: London, 22nd June on 6/29/2012:

imageAnybody who was there knows the demo in the talk I gave timed out. So I’ve done it again and added it to the talk using the magic of video editing!

With all the excitement around the IaaS features such as Virtual Machines and Virtual Network, and then the added cool of Azure Websites, it seemed to me the very basis on which Windows Azure had made its success so far – as a PaaS platform – was left behind.

imageIn this talk, I show how to get started writing apps, using the local compute emulator and then deploying them to staging and eventually on to production. I use VS2012/Windows 8, but the principles are the same for VS2010 and Windows 7. I show how to code up a multi-instance web role and worker role app that uses blob storage and load-balances work to the back end using queues.

image

Click here for the [lengthy, 01:59:59] video.
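
For readers who want the gist without watching the full two hours: the pattern Planky demos boils down to a web role that enqueues a message naming a blob, and one or more worker-role instances that poll the queue and process the blob, with the queue providing the load balancing. Here is a rough sketch using the 1.7 storage client library; the queue, container and configuration-setting names are illustrative, not the ones used in the talk:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueLoadBalancingSketch
{
    // Web role side: drop a work item on the queue.
    public static void Enqueue(string blobName)
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString")); // assumed setting name
        var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(blobName));
    }

    // Worker role side (called from Run()): poll the queue and process the referenced blob.
    public static void ProcessLoop()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        queue.CreateIfNotExist();
        container.CreateIfNotExist();

        while (true)
        {
            var msg = queue.GetMessage();
            if (msg == null) { Thread.Sleep(1000); continue; }
            var blob = container.GetBlobReference(msg.AsString);
            // ... download and process the blob here ...
            queue.DeleteMessage(msg);
        }
    }
}

Adding more worker-role instances is what scales the back end; each instance competes for messages on the same queue.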


Rob Enderle (@Enderle) asserted Microsoft’s Azure Hybrid Cloud offering appears promising, especially for businesses struggling with the complication of Cloud deployment in a deck for his TechEd and Windows Azure: Suddenly the Clouds Open.... article of 6/12/2012 for Datamation:

imageSuddenly the clouds opened and angels sang….at least that was the sense I got at the TechEd Developer’s conference last week.

You see, one of the huge problems IT has been having is users breaking out their credit card and using it to get hosted services from non-compliant vendors for critical line projects. The IT-approved and certified services are simply too difficult to get and set up so the users just dance around them and go web shopping.

imageWell, Microsoft got IT’s attention when they launched their Azure Hybrid Cloud offering because it appears to not only be better certified than other services and more easily integrated, it is also easier to set up and use.

In fact one story I was told at the show was about a bunch of Linux folks working on a major collaborative project using this service and having it fully provisioned and running in under 15 minutes. I was also told I couldn’t write about it in detail because none of them wanted to get hate mail. But when hard-core Linux folks start to prefer a Microsoft service this becomes a “man bites dog” story and vastly more interesting.

The Service

Currently in beta mode, this service consists of Windows Azure Virtual Machines, Windows Azure Virtual Network, Windows Azure Web Sites, and – omg – that’s enough Azure already. It has a web front end, critical for a tool like this, and it covers a variety of popular tools like Word Press and a large number of platforms, including an impressive selection of Linux distributions. Not Red Hat, though they are trying to work out an agreement with them. (Red Hat and Microsoft haven’t historically been the best of buddies).

Whether it is setting up a web site or extending a premise resource into the cloud because of emergency capacity issues, setup and use is amazingly easy. I say that because I haven’t done this kind of work for years and I found I could figure out most things very quickly (guys don’t like to ask questions and we have really short attention spans at my age). I’ve actually seen games that were harder to set up and play.

Now the first ten instances are free, though these are shared so performance could be iffy and eventually there may be a minor charge for this entry service (that hasn’t been worked out yet). And you can easily provision for extra resources and services (with a fee) if you need more. But for small projects, the kind that employees are currently using credit cards for, this could be ideal and certainly worth checking out.

There is an interesting case study on the Azure-based Harry Potter site, and if you are interested in security and compliance, the details for that are available. Among the audit and certification standards that have been met are ISO/IEC 27001:2005 and SSAE 16/ISAE 3402 attestation, for example.

Wrapping Up: And Angels Sang

The positive reaction this service got was impressive; for a Microsoft event, it was almost as if the audience suddenly heard angels singing. As noted, this service is still in beta. But for a bunch of IT folks struggling to avoid major compliance problems associated with employees bypassing IT, the offering of an approved, secure, and easy-to-use service – one that works with their Microsoft infrastructure – must have been like words from heaven.

Surprisingly enough, a bunch of folks were actually similarly excited about Windows 8 on tablets, which would address similar concerns about out-of-control and non-compliant platforms and services flowing into companies.

This may have been the most exciting TechEd in years; who knew? I guess sometimes Christmas comes in June.

Rob is one of the most experienced and trusted computer industry analysts. I remember his work from my MS-DOS 2.0 days, after my company migrated its business computers from Commodore CBMs to Wintel machines.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Red Hat, Inc. (@RedHatNews) asserted “Red Hat Enterprise Linux Developer Program with Developer Suite Bridges Development Agility with Production Stability” in an introduction to its Red Hat Delivers Advanced Tooling and Community Resources to Software Developers press release of 6/26/2012 (via BusinessWire):

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced it has expanded its Red Hat Enterprise Linux Developer Program with enhancements to its Developer Suite, including a new toolset for software developers worldwide. Through the Red Hat Enterprise Linux Developer Suite, Red Hat delivers the latest, stable open source developer tool versions at a faster cadence than that of Red Hat Enterprise Linux. Developers now have access to a robust suite of tools with synchronized availability on Red Hat Enterprise Linux and Red Hat OpenShift™, allowing developers to deploy applications freely to either environment.

“For Linux programmers, having ready access to the latest, stable development tools is key to taking advantage of new Linux advancements,” said Jim Totton, vice president and general manager, Platform Business Unit, Red Hat, Inc. “The Red Hat Enterprise Linux Developer Program makes it easy for developers to access industry-leading developer tools, instructional resources and an ecosystem of experts to help Linux programmers maximize productivity in building great Red Hat Enterprise Linux applications.”

Designed for many types of Linux developers, including Independent Software Vendors (ISVs), software solution providers, Systems Integrators (SIs), enterprise, and government software developers, the Red Hat Enterprise Linux Developer Suite enhances developer productivity and improves time to deployment by providing affordable access and updates to essential development tools. The latest, stable tooling can be used to develop applications on Red Hat Enterprise Linux whether on-premise or off-premise in physical, virtual and cloud deployments, and on OpenShift, the leading open Platform-as-a-Service (PaaS).

The Red Hat Enterprise Linux Developer Suite includes:

  • Red Hat Enterprise Linux, variants, and related Add-On software for development use including Red Hat Enterprise Linux, High-Availability Add-On, Load Balancer Add-On, Resilient Storage Add-On, Scalable File System Add-On, High-Performance Network Add-On, Extended Update Support, and MRG Real Time and Smart Management Add-on.
  • Red Hat Enterprise Linux Developer toolset, a collection of development tools to create highly scalable applications. Delivered as part of the Developer Suite, Red Hat plans to accelerate the release cadence of these tools to deliver the latest, stable open source developer tool versions on a separate life cycle from Red Hat Enterprise Linux releases.

The first version of the Red Hat Enterprise Linux Developer Suite includes a toolset that makes developing Linux software applications faster and easier by allowing users to compile once and deploy to multiple versions of Red Hat Enterprise Linux. Using the developer toolset, software developers can now develop Linux applications using the latest C and C++ upstream tools. These tools include the latest GNU Compiler Collection (GCC 4.7) with support for C and C++; the latest version of the GNU Project Debugger (GDB 7.4) with improvements to aid the debugging of applications; and the GNU binutils collection of binary developer tools, version 2.22, for the creation and management of Linux applications.

“The velocity of development is as high today as it has ever been, which means that developers are putting a premium on a toolchain that is current from libraries to compiler,” said Stephen O’Grady, Principal Analyst with RedMonk. “With its expanded Red Hat Enterprise Linux Developer Program and toolset, Red Hat aims to provide developers with just that.”

The self-supported Red Hat Enterprise Linux Developer Suite and the Red Hat Enterprise Linux Developer Support Subscriptions are available immediately worldwide. Red Hat customers and partners can join the developer online user group on Red Hat’s award-winning customer portal to access the extensive knowledgebase and recommended practices.

Additional Resources:


David Linthicum (@DavidLinthicum) asserted “Up against overly complex services from Amazon.com, Microsoft, and Rackspace, Google could strike gold by simplifying” as a deck for his Google: The great hope for IaaS post of 6/26/2012 to InfoWorld’s Cloud Computing blog:

It’s almost a certainty that Google will announce an enhanced IaaS offering at its developer conference this week in San Francisco. Most industry analysts, and yours truly, have been expecting this move – and hoping it would happen. It will build on Google’s existing PaaS product, Google App Engine, as well as Google’s storage services.

This is a sound decision on Google’s part. It needs to provide an IaaS option that supports its popular PaaS offering to achieve parity with both Amazon Web Services and Microsoft’s combo of Azure and Office 365. But it could have a benefit beyond the competitive landscape: It could help simplify the overly complex IaaS market.

If the Google offering is easier to use than existing IaaS wares, such as those provided by Amazon.com and Rackspace, Google may finally find a way to penetrate the large enterprise market that has largely pushed back on the use of public IaaS.

The horsepower of the Google brand name combined with an IaaS setup built more for line managers than developers could address a need that's unmet today: the ability to quickly provision storage and compute resources, as well as migrate to the resources and provide turnkey management.

Certainly, IaaS offerings from Amazon.com and Rackspace are powerful. But they're daunting for nongeeks. A less technical IaaS from Google could help small businesses that have few IaaS offerings they can afford and actually handle. And it could help enterprises adopt IaaS more quickly by getting IaaS out of the IT project queue and into local business units' laps.

Google could provide the path of least resistance to IaaS for both small businesses and enterprises. When the formal news hits later this week, that should be the yardstick by which to measure the offering.

I’m not sanguine about the prospect of business unit managers (BUMs?) setting up and using virtual machines in anyone’s cloud, although Microsoft’s new Windows Azure Management Portal makes it easier than it once was.


Jeff Shute, Mircea Oancea et al. presented F1 - The Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business at the Association for Computing Machinery’s Special Interest Group on the Management of Data (SIGMOD) 2012 conference on 5/22/2012 (missed when presented):

F1 – A Hybrid Database combining the:
● Scalability of Bigtable
● Usability and functionality of SQL databases

Key Ideas:
● Scalability: Auto-sharded storage
● Availability & Consistency: Synchronous replication
● High commit latency: can be hidden by:
○ Hierarchical schema
○ Protocol buffer column types
○ Efficient client code

Can you have a scalable database without going NoSQL? Yes.
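To make the “hierarchical schema” bullet above a little more concrete: in an AdWords-style Customer → Campaign → AdGroup hierarchy, a child row’s key is prefixed by its parent’s key, so a root row and all of its descendants fall into one contiguous key range and, under range-based auto-sharding, usually one shard. The sketch below is only an illustration of that keying idea, not F1 code; the key encoding and class names are invented for the example.

```csharp
// Rough illustration (not F1 code): hierarchical keys in which a child row's key
// begins with its parent's key, so a customer and all of its campaigns and ad
// groups sort into one contiguous key range, and therefore usually one shard.
using System;
using System.Collections.Generic;
using System.Linq;

class HierarchicalKeysDemo
{
    // Key encodings are invented for the example; ids are zero-padded so that
    // lexicographic order matches numeric order.
    static string CustomerKey(long customerId)
    {
        return string.Format("Customer({0:D10})", customerId);
    }

    static string CampaignKey(long customerId, long campaignId)
    {
        return CustomerKey(customerId) + string.Format("/Campaign({0:D10})", campaignId);
    }

    static string AdGroupKey(long customerId, long campaignId, long adGroupId)
    {
        return CampaignKey(customerId, campaignId) + string.Format("/AdGroup({0:D10})", adGroupId);
    }

    static void Main()
    {
        var keys = new List<string>
        {
            AdGroupKey(2, 5, 8),
            CustomerKey(1),
            CampaignKey(2, 5),
            CustomerKey(2),
            CampaignKey(1, 3),
        };

        // Sorting by key clusters each customer with all of its descendant rows,
        // so a transaction that touches one customer stays within one key range
        // and touches fewer machines.
        foreach (var key in keys.OrderBy(k => k, StringComparer.Ordinal))
        {
            Console.WriteLine(key);
        }
    }
}
```

The sorted output groups Customer(1) with its campaign and Customer(2) with its campaign and ad group; touching fewer machines per transaction is one reason the commit latency introduced by synchronous replication can be hidden.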


Summary:

We've moved a large and critical application suite from MySQL to F1.

This gave us:
● Better scalability
● Better availability
● Equivalent consistency guarantees
● Equally powerful SQL query

And also similar application latency, using:
● Coarser schema with rich column types
● Smarter client coding patterns

In short, we made our database scale and didn't lose any key database features along the way.

F1 replaces sharded MySQL and enables parallel reads with SQL or MapReduce. Should we expect Google App Engine to deliver F1 as a service?


<Return to section navigation list>