Saturday, August 20, 2011

Windows Azure and Cloud Computing Posts for 8/18/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Update 8/20/2011 8:30 AM PDT Added link to source code for Brent Stineman’s A page at a time (Page Blobs–Year of Azure Week 7) post in the Azure Blob, Drive, Table and Queue Services section below.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

Brent Stineman (@BrentCodeMonkey) continued his “Year of Azure” series with A page at a time (Page Blobs–Year of Azure Week 7) on 8/18/2011:

Going to make this quick. I’m sitting in SeaTac airport in Seattle enjoying the free wifi and need to knock this out quickly as we’re going to start boarding in about 10-15 minutes. I’ve been here in Seattle doing some Azure training and working with a client that’s trying to move to Azure when my topic for this week fell into my lap.

One of the attendees at my training wanted to see an example of using page blobs. I poked around and couldn’t find one that I liked, so I decided I’d come up with one. Then, later in the afternoon, we had a discussion about an aspect of their client application, and the idea of the random-access abilities of page blobs came to mind. So while I haven’t had a chance to fully prove out my idea yet, I do want to share the first part of it with you.

The sample below focuses on how to take a stream, divide it into chunks, and write those to an Azure Storage page blob. Now, in the sample I keep each write to storage at 512 bytes (the size of a single page), but you could use any multiple of 512. I just wanted to be able to demonstrate the chunk/reassemble process.

We start off by setting up the account, and creating a file stream that we’ll write to Azure blob storage:

MemoryStream streams = new MemoryStream();
// create storage account
var account = CloudStorageAccount.DevelopmentStorageAccount;
// create blob client
CloudBlobClient blobStorage = account.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
container.CreateIfNotExist(); // adding this for safety
// give the blob a unique name
string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
// load the sample image and save it into the memory stream
System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
imgs.Save(streams, ImageFormat.Jpeg);
You may remember this code from the block blob samples I did a month or two back.

Next up, I need to create the page blob:

CloudPageBlob pageBlob = container.GetPageBlobReference(uniqueBlobName);
pageBlob.Properties.ContentType = "image/jpeg"; // MIME types use a forward slash
pageBlob.Metadata.Add("size", streams.Length.ToString());
pageBlob.Create(23552); // must be a multiple of 512

Notice that I’m setting it to a fixed size. This isn’t ideal, but in my case I know exactly what size the file I’m uploading is and this is about twice what I need. We’ll get to why I’ve done that later. The important part is that the size MUST be a multiple of 512. No partial pages allowed!
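Rather than hard-coding an oversized value, you can compute the smallest multiple of 512 that will hold the stream. Here’s a minimal sketch of that arithmetic; the `PageBlobHelper` and `RoundUpToPage` names are mine, not part of the storage client library:

```csharp
using System;

static class PageBlobHelper
{
    const int PageSize = 512;

    // Round a byte count up to the nearest 512-byte page boundary,
    // since page blobs can only be created at page-aligned sizes.
    public static long RoundUpToPage(long length)
    {
        if (length < 0) throw new ArgumentOutOfRangeException("length");
        return ((length + PageSize - 1) / PageSize) * PageSize;
    }

    static void Main()
    {
        // an 11,175-byte image needs an 11,264-byte page blob (22 pages)
        Console.WriteLine(RoundUpToPage(11175));
        // an already-aligned length is unchanged
        Console.WriteLine(RoundUpToPage(23552));
    }
}
```

With a helper like this, calling Create with the rounded-up stream length would avoid most of the over-allocation up front, though you’d still need the resize trick shown later if the final size isn’t known when the blob is created.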

And finally, we start reading the file stream into a byte array buffer, convert that buffer into a memory stream (I know there’s got to be a way to avoid this, but I was in a hurry to write the code for this update), and write each “page” to the page blob.

streams.Seek(0, SeekOrigin.Begin);
byte[] streambuffer = new byte[512];
int numBytesToRead = (int)streams.Length;
int numBytesRead = 0;
while (numBytesToRead > 0)
{
    // Read may return anything from 0 to streambuffer.Length bytes
    int n = streams.Read(streambuffer, 0, streambuffer.Length);
    // the end of the stream has been reached
    if (n == 0)
        break;
    MemoryStream theMemStream = new MemoryStream();
    theMemStream.Write(streambuffer, 0, streambuffer.Length);
    theMemStream.Position = 0;
    // numBytesRead doubles as the byte offset of the page being written
    pageBlob.WritePages(theMemStream, numBytesRead);
    numBytesRead += n;
    numBytesToRead -= n;
}
Simple enough, and it works pretty well to boot! The one piece we’re missing, however, is the ability to shrink the page blob down to the actual minimum size I need. For that, we’re going to use the code snippet below:
Uri requestUri = pageBlob.Uri;
if (blobStorage.Credentials.NeedsTransformUri)
    requestUri = new Uri(blobStorage.Credentials.TransformUri(requestUri.ToString()));
// 200 is the request timeout; 12288 is the new, smaller blob length in bytes
// (still a multiple of 512)
HttpWebRequest request = BlobRequest.SetProperties(requestUri, 200,
    pageBlob.Properties, null, 12288);
blobStorage.Credentials.SignRequest(request);
using (WebResponse response = request.GetResponse())
{
    // call succeeded
}
You’ll notice this is being done via a REST request directly against blob storage; resizing a blob isn’t supported by the storage client library. I also need to give credit for this last snippet to the Azure Storage Team.
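One caveat about the page-writing loop earlier: when the final Read returns fewer than 512 bytes, the tail of streambuffer still holds leftover bytes from the previous iteration, and those stale bytes get written out as part of the last page. A defensive sketch of the fix, as pure byte-array logic independent of the storage client (the `PadToPage` name is mine):

```csharp
using System;

static class PagePadding
{
    const int PageSize = 512;

    // Copy the n valid bytes of a read buffer into a fresh, zero-filled
    // 512-byte page, so a short final read never drags stale data along.
    public static byte[] PadToPage(byte[] buffer, int n)
    {
        if (buffer == null) throw new ArgumentNullException("buffer");
        if (n < 0 || n > PageSize) throw new ArgumentOutOfRangeException("n");
        byte[] page = new byte[PageSize]; // new arrays are zero-initialized in .NET
        Array.Copy(buffer, page, n);
        return page;
    }

    static void Main()
    {
        byte[] buffer = new byte[PageSize];
        for (int i = 0; i < PageSize; i++) buffer[i] = 0xFF; // simulate stale data
        byte[] page = PadToPage(buffer, 10); // pretend only 10 bytes were read
        Console.WriteLine(page[9]);  // valid data survives
        Console.WriteLine(page[10]); // stale tail is zeroed
    }
}
```

Inside the loop, you’d write PadToPage(streambuffer, n) to the memory stream whenever n comes back short, or, more simply, Array.Clear the buffer before each Read.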

As I mentioned, I’m in a hurry and wanted to get this out before boarding. So you’ll need to wait until next week to see why I’m playing with this; hopefully the potential will excite you. Until then, I’ll try to refine the code a bit and get the entire solution posted online for you.

Until next time!

• Updated 8/20/2011 8:30 AM PDT: The downloadable source code is here.


<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket and OData

No significant articles today.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Todd Hoff posted Paper: The Akamai Network - 61,000 servers, 1,000 networks, 70 countries to his High Scalability Blog on 8/18/2011:

Update: as of the end of Q2 2011, Akamai had 95,811 servers deployed globally.

Akamai is the CDN to the stars. It claims to deliver between 15 and 30 percent of all Web traffic, with major customers like Facebook, Twitter, Apple, and the US military. Akamai is traditionally quite secretive, but we get a peek behind the curtain in this paper: The Akamai Network: A Platform for High-Performance Internet Applications by Erik Nygren, Ramesh Sitaraman, and Jennifer Sun.

Abstract:

Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management.

Delivering applications over the Internet is a bit like living in the Wild West; there are problems: peering-point congestion, inefficient communications protocols, inefficient routing protocols, unreliable networks, scalability limits, application limitations, and a slow rate of change adoption. A CDN is the White Hat trying to remove these obstacles for enterprise customers. It does this by creating a delivery network that is a virtual network over the existing Internet. The paper goes on to explain how Akamai makes this happen using edge networks and a sophisticated software infrastructure. With such a powerful underlying platform, Akamai is clearly Google-like in its ability to deliver products few others can hope to match.

Detailed and clearly written, it's well worth a read.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Chris Czarnecki provided a third-party view of Windows Azure Toolkit For Social Games in an 8/18/2011 post to the Learning Tree blog:

An interesting development in the Cloud Computing world has been the release of the Windows Azure Toolkit for Social Games. The success of companies such as Zynga in social gaming, coupled with the predicted growth in revenues generated from social gaming over the next few years, has raised the profile of this interesting area of Cloud Computing.

By supporting the potential rapid scaling requirements of successful social games in a cost-effective manner, Cloud Computing lowers the entry point for any organisation, or indeed individuals, who have an idea and wish to implement it. To support this kind of development, the Azure Platform as a Service (PaaS) is an environment that enables games developers to focus on the product without having to worry about infrastructure and the associated operational requirements. The release of the Azure Toolkit for Social Games further adds to the attraction of Azure as an environment for social games development. So what exactly does the toolkit provide?

The Toolkit comprises three main components:

  1. Server APIs, comprising a set of services for achievements, virtual goods, leaderboards, user accounts, notifications and virtual currencies. The APIs are JSON REST services and so can be accessed from many different device types.
  2. Test Client, enabling the testing of games without the full user experience. Commands can be sent to games as though coming from users, for the purposes of test and development.
  3. Sample Game, a game named Tankster that demonstrates the use of the toolkit.

Currently the toolkit is available for .NET and HTML5, although support for other languages will be available in the future. The release of this toolkit is an interesting development from Microsoft which brings together three growth areas in computing: social networks, distributed gaming and Cloud Computing. It will be interesting to see how this toolkit is utilised, and by whom.


Rob Tiffany (@RobTiffany) continued his series with a Consumerization of IT Collides with MEAP: Windows Phone > On Premise episode of 8/18/2011:

In my Consumerization of IT Collides with MEAP article last week, I described how to connect a Windows 7 device to Microsoft’s Cloud servers in Azure. In this week’s scenario, I’ll use the picture below to illustrate how Windows Phone utilizes many of Gartner’s Critical Capabilities to connect to Microsoft’s On-Premise infrastructure:

As you can see from the picture above:

  1. For the Management Tools Critical Capability, Windows Phone uses Microsoft Exchange for On-Premise policy enforcement but has no private software distribution equivalent to System Center Configuration Manager 2007. Targeted and beta software distribution is supported through the Windows Phone Marketplace via Windows Live IDs and deep links.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Windows Phone uses Visual Studio. The free Windows Phone SDK plugs into Visual Studio and provides developers with everything they need to build mobile applications. It even includes a Windows Phone emulator so developers don’t have to own a phone to develop apps.
  3. For the cross-platform Application Client Runtime Critical Capability, Windows Phone uses the Silverlight flavor of .NET for thick clients. For thin clients, it uses Internet Explorer 9 to provide HTML5 + CSS3 + ECMAScript5 capabilities. Offline storage is important to keep potentially disconnected mobile clients working and this is facilitated by SQL Server Compact + Isolated Storage for thick clients and Web Storage for thin clients.
  4. For the Security Critical Capability, Windows Phone provides security for 3rd party application data-at-rest via AES 256, data-in-transit via SSL, & Authorization/Authentication via Active Directory. Full device encryption or encryption of PIM/Email data is not supported.
  5. For the Enterprise Application Integration Tools Critical Capability, Windows Phone can reach out to servers directly via Web Services or indirectly via SQL Server or BizTalk using SSIS/Adapters to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol directly, via Reverse Proxy, or VPN is facilitated by ISA/TMG/UAG/IIS. Cross-platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST and AtomPub. Cross-platform data serialization is also provided by WCF, including XML, JSON, and OData. These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls. Distributed caching to dramatically boost the performance of any client is provided by Windows Server AppFabric Caching.
  7. While the Hosting Critical Capability may not be as relevant in an on-premises scenario, Windows Azure Connect provides an IPSec-protected connection to the Cloud and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure.
  8. For the Packaged Mobile Apps or Components Critical Capability, Windows Phone runs cross-platform mobile apps including Office/Lync/IE/Outlook/Bing.

As you can see, Windows Phone meets many of Gartner’s Critical Capabilities, but isn’t as strong as Windows 7 in areas of full-device security and device management.

Next week, I’ll cover how Windows Phone connects to the Cloud.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

David Linthicum (@DavidLinthicum) asserted “Cloud computing performance varies more than you might think, but the price of consistency likely exceeds what you want to spend” as a deck for his Face the facts: Cloud performance isn't always stable article of 8/18/2011 for InfoWorld’s Cloud Computing blog:

Are cloud services slow? Or fast? Both, it turns out -- and that reality could cause unexpected problems if you rely on public clouds for part of your IT services and infrastructure.

Clouds are multitenant whether they are PaaS, IaaS, or SaaS; that means more than one machine or user accesses the virtual and physical resources (storage, processor, network, and memory) of the cloud system simultaneously. Despite some very effective multitenant cloud systems, you typically can tell when another user is sharing those resources with you or your processes.

As a result, cloud services tend to have performance profiles that are variable in nature, depending on what goes on in that cloud at any particular moment. When I log performance on cloud-based processes -- some that are I/O intensive, some that are not -- I get results that vary randomly throughout the day. In fact, they appear to have the pattern of a very jittery process. Clearly, the program or system is struggling to obtain virtual resources that, in turn, struggle to obtain physical resources. Also, I suspect this "jitter" is not at all random, but based on the number of other processes or users sharing the same resources at that time.

Of course, in a cloud computing environment, you can spin up as many instances of computing resources as you need. As you use more instances, any variation in the performance of a single instance is masked by the sheer number of instances. Moreover, as you spin up instances, they typically reside in different physical machines, which also lessens resource contention and on average keeps payloads' performance on par with each other.

Still, the actual performance of your cloud system across many instances depends largely on how well it's been designed. Providers differ significantly in their cloud architecture and design prowess.

The variability in performance only becomes an issue when people have to suffer through an I/O-intensive and/or chatty application where inputs and screen writes are noticeably sporadic. Alternatively, it may happen when the performance varies more on the slow side, and large processes -- such as huge database transformations and writes that occur in daily runs -- don't take place at optimal times. But that's when people (users or IT admins) seem to most care about performance.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

K. Scott Morrison (@KScottMorrison) offered “A series of questions designed to document the security controls a cloud provider has in place” as a deck for his Cloud Security Alliance Introduces The Security, Trust and Assurance Registry article of 8/18/2011:

As a vendor of security products, I see a lot of Requests for Proposal (RFPs). More often than not these consist of an Excel spreadsheet with dozens—sometimes even hundreds—of questions ranging from how our products address business concerns to security minutiae that only a high-geek can understand. RFPs are a lot of work for any vendor to respond to, but they are an important part of the selling process and we always take them seriously. RFPs are also a tremendous amount of work for the customer to prepare, so it’s not surprising that they vary greatly in sophistication.

I’ve always thought it would be nice if the SOA gateway space had a standardized set of basic questions that focused vendors and customers on the things that matter most in Governance, Risk and Compliance (GRC). In the cloud space, such a framework now exists. The Cloud Security Alliance (CSA) has introduced the Security, Trust and Assurance Registry (STAR), which is a series of questions designed to document the security controls a cloud provider has in place. IaaS, PaaS and SaaS cloud providers will self-assess their status and publish the results in the CSA’s centralized registry.

Providers report on their compliance with CSA best practices in two different ways. From the CSA STAR announcement:

1. The Consensus Assessments Initiative Questionnaire (CAIQ), which provides industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings. The questionnaire (CAIQ) provides a set of over 140 questions a cloud consumer and cloud auditor may wish to ask of a cloud provider. Providers may opt to submit a completed Consensus Assessments Initiative Questionnaire.
2. The Cloud Controls Matrix (CCM), which provides a controls framework that gives detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains. As a framework, the CSA CCM provides organizations with the needed structure, detail and clarity relating to information security tailored to the cloud industry. Providers may choose to submit a report documenting compliance with Cloud Controls Matrix.

The spreadsheets cover eleven control areas, each subdivided into a number of distinct control specifications. The control areas are:

  1. Compliance
  2. Data Governance
  3. Facility Security
  4. Human Resources
  5. Information Security
  6. Legal
  7. Operations Management
  8. Risk Management
  9. Release Management
  10. Resiliency
  11. Security Architecture

The CSA hopes that STAR will help to shorten purchasing cycles for cloud services because the assessment addresses many of the security concerns that users have today with the cloud. As with any benchmark, over time vendors will refine their product to do well against the test—and as with many benchmarks, this may be to the detriment of other important indicators. But this set of controls has been well thought through by the security professionals in the CSA community, so cramming for this test will be a positive step for security in the cloud.


Dave Asprey (@daveasprey, pictured below) posted a link to Encryption in the Public Cloud: Infoworld Webcast on 8/18/2011:

Encryption in the Public Cloud

Listen in as Bob Bragdon discusses data privacy concerns with Dave Asprey, VP of Cloud Security at Trend Micro. Learn about the sixteen best practices you can implement today to secure your data within public cloud environments.

Sponsor: Trend Micro

via resources.infoworld.com

This is actually an interesting discussion with real content but it's behind a very lightweight registration wall on InfoWorld's site so I can't link to it directly. Well, I can, but the link expires an hour later and I'm far too lazy to write a script to get around it. :)

Enjoy!


<Return to section navigation list>

Cloud Computing Events

No significant articles today.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

David Strom reported Catbird Partners With VMware on vShield App in an 8/18/2011 post to the ReadWriteCloud blog:

Catbird, which already sells its own VM security and protection toolset called vSecurity, has entered into a partnership to OEM VMware's vShield App technology. Catbird will include it in a future version of its own software scheduled for later this year. vShield is an inline firewall for VM hypervisors, something missing from vSecurity.

Earlier, VMware acquired BlueLane, maker of a different security product that does mostly packet filtering but not true stateful packet inspection, the core of most modern firewalls.

Confusingly, vShield has a number of different components in its family; vShield App is the only part of the technology that VMware is including in the OEM arrangement with Catbird. This piece protects Web protocols and other applications. Trend Micro, for example, makes use of the vShield Endpoint piece.

catbird trust zones.png

This partnership helps Catbird get a further leg up in this marketplace. With the addition of VMware vShield to vSecurity's Control Center, customers can dynamically control network access between VMs; Catbird's policy enforcer will include full VMware vShield controls and can enforce various compliance standards.

The vSecurity product also has network access controls, VM intrusion detection, and compliance policy management. Catbird also supports Citrix Xen hypervisors and will continue to do so, although the Xen edition will of course lack any vShield features.

For more information about these products, you can read our two-part series on VM protection products here.

Interestingly, the agreement between VMware and Catbird was signed several months ago and the engineering teams from both companies have been hard at work with the integration process.



<Return to section navigation list>
