Wednesday, August 25, 2010

Windows Azure and Cloud Computing Posts for 8/25/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Jerry Huang asserted “Hard drive is a commodity. Whether it is IDE, EIDE or SATA, you buy the right one and plug it directly into your computer” as a preface to his Cloud Storage Becoming a Commodity post of 8/25/2010:

Hard drive is a commodity. Whether it is IDE, EIDE or SATA, you buy the right one and you can plug it directly into your computer.

A commodity is a good for which there is demand, but which is supplied without qualitative differentiation across a market. Commodities are substances that come out of the earth and maintain roughly a universal price.
Quote - Wikipedia

Cloud Storage is becoming a commodity. Pricing is converging, and the interfaces are becoming similar across cloud storage vendors.

There are three layers in the cloud storage stack:

  1. cloud storage software facing the end user;
  2. service providers providing the services; and
  3. cloud storage vendors providing the solution to the service providers.

On the user front, more and more cloud storage software is available, supporting more and more cloud storage vendors. For example, in 2008, Gladinet supported SkyDrive, Google Docs, Google Picasa and Amazon S3. In 2010, the list expands to include AT&T Synaptic Storage, Box.net, EMC Atmos Online, FTP, Google Docs, Google Apps, Google Storage For Developers, Mezeo, Nirvanix, Peer1 CloudOne, Windows Azure, WebDav and more. This trend enables consumers to pick and choose which cloud storage services they need.

On the service provider front (service providers are the biggest cloud storage service providers), they may not all create cloud storage services themselves. For example, AT&T and Peer1 are using EMC Atmos. Verizon and Planet are using Nirvanix. As time goes on, we will see the same provider using multiple backend solutions to satisfy different needs. Also, when service providers merge, the merged company may be using backend solutions from multiple vendors.

On the cloud storage vendor front, more and more are conforming to the Amazon S3 API. We saw Google Storage for Developers, Eucalyptus, Dunkel and Mezeo all creating S3-compatible APIs for their cloud storage solutions. On the other hand, Rackspace is pushing the OpenStack project. All are trying to create a unified interface.

All these are turning cloud storage into a commodity.

If Neo4j can provide Advanced Indexes Using Multiple Keys, as reported by Alex Popescu on 8/25/2010 in his myNoSQL blog, why can’t Azure Tables?

There’s a prototype implementation of a new index which solves this (and some other issues as well, f.ex. indexing for relationships).

The code is at https://svn.neo4j.org/laboratory/components/lucene-index/ and it’s built and deployed over at http://m2.neo4j.org/org/neo4j/neo4j-lucene-index/

The new index isn’t compatible with the old one so you’ll have to index your data with the new index framework to be able to use it.

Before, you were only able to search by a single property.

Original title and link for this post: Neo4j: Advanced Indexes Using Multiple Keys (published on the NoSQL blog: myNoSQL)


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Alex James announced on 8/25/2010 the #odata Daily newspaper that’s updated every day:

A newspaper built from all the articles, blog posts, videos and photos shared on Twitter using the #OData hashtag.



Wayne Walter Berry (@WayneBerry) announced SQL Azure Service Update 4 in an 8/24/2010 post to the SQL Azure Team blog:

Service Update 4 is now live with database copy, improved help system, and deployment of Microsoft Project Code-Named “Houston” to multiple data centers.

  • Support for database copy: Database copy allows you to make a real-time complete snapshot of your database into a different server in the data center. This new copy feature is the first step in backup support for SQL Azure, allowing you to get a complete backup of any SQL Azure database before making schema or database changes to the source database. The ability to snapshot a database easily is our top requested feature for SQL Azure, and goes above and beyond our data center replication to keep your data always available. The MSDN Documentation with more information is entitled: Copying Databases in SQL Azure.
  • Additional MSDN Documentation: MSDN has created a new section called Development: How-to Topics (SQL Azure Database) which has links to information about how to perform common programming tasks with Microsoft SQL Azure Database.
  • Update on “Houston”: Microsoft Project Code-Named “Houston” (Houston) is a lightweight web-based database management tool for SQL Azure. Houston, which runs on top of Windows Azure, is now available in multiple datacenters, reducing the latency between the application and your SQL Azure database.

Wayne continued with Backing Up Your SQL Azure Database Using Database Copy on 8/24/2010:

With the release of Service Update 4 for SQL Azure you now have the ability to make a snapshot of your running database on SQL Azure. This allows you to quickly create a backup before you implement changes to your production database, or to create a test database that resembles your production database.

The backup is performed in the SQL Azure datacenter using a transactional mechanism without downtime to the source database. The database is copied in full to a new database in the same datacenter. You can choose to copy to a different server (in the same data center) or the same server with a different database name.

A new database created from the copy process is transactionally consistent with the source database at the point in time when the copy completes. This means that the snapshot time is the end time of the copy, not the start time of the copy.

Getting Started

The Transact SQL looks like this:

CREATE DATABASE destination_database_name
    AS COPY OF [source_server_name.]source_database_name

To copy the Adventure Works database to the same server, I execute this:

CREATE DATABASE [AdventureWorksBackup] AS COPY OF [AdventureWorksLTAZ2008R2]

This command must be executed while connected to the master database of the destination SQL Azure server.

Monitoring the Copy

You can monitor the currently copying database by querying a new dynamic management view called sys.dm_database_copies.

An example query looks like this:

SELECT *
FROM sys.dm_database_copies

Here is my output from the Adventure Works copy above:

[Screenshot: results of the sys.dm_database_copies query]
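If you prefer to drive the copy from code rather than a query window, the same two statements can be issued over an ordinary ADO.NET connection. The following console sketch is illustrative only (it is not from Wayne's post); the server name, credentials and database names are placeholders, and the connection targets the destination server's master database as described above:

using System;
using System.Data.SqlClient;
using System.Threading;

class SqlAzureCopySample
{
    static void Main()
    {
        // Placeholder connection string: connect to the destination server's master database.
        const string masterCs =
            "Server=tcp:yourserver.database.windows.net;Database=master;" +
            "User ID=yourlogin@yourserver;Password=yourpassword;Encrypt=True;";

        using (var cn = new SqlConnection(masterCs))
        {
            cn.Open();

            // Start the server-side copy; the statement returns before the copy finishes.
            using (var start = new SqlCommand(
                "CREATE DATABASE AdventureWorksBackup AS COPY OF AdventureWorksLTAZ2008R2", cn))
            {
                start.ExecuteNonQuery();
            }

            // Poll the DMV; the row disappears when the copy completes (or fails).
            using (var poll = new SqlCommand(
                "SELECT percent_complete FROM sys.dm_database_copies " +
                "WHERE partner_database = 'AdventureWorksLTAZ2008R2'", cn))
            {
                object percent;
                while ((percent = poll.ExecuteScalar()) != null)
                {
                    Console.WriteLine("Copying... {0}% complete", percent);
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }

            Console.WriteLine("Copy no longer in progress; check sys.databases for the final state.");
        }
    }
}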

Permissions Required

When you copy a database to a different SQL Azure server, the exact same login/password executing the command must exist on the source server and destination server. The login must have db_owner permissions on the source server and dbmanager on the destination server. More about permissions can be found in the MSDN article: Copying Databases in SQL Azure.

One thing to note is that the server you copy your database to does not need to belong to the same service account. In fact, you can give or transfer your database to a third party by using this database copy command. As long as the user transferring the database has the correct permissions on the destination server and the login/password match, you can transfer the database. I will show how to do this in a future blog post.

Why Copy to Another Server?

You will obtain the same resource allocation in the data center whether you copy to the same server or a different server. Each server is just an endpoint – not a physical machine; see our blog post entitled: A Server Is Not a Machine for more details. So why copy to another server? There are two reasons:

  • You want the new database to have a different admin account in the SQL Azure portal than the source database. This would be desirable if you are copying a production database to a testing server, where the testers own the testing server and can create and drop databases as they desire.
  • You want the new database to fall under a different service account for billing purposes.

Summary

More information about copying can be found in the MSDN article: Copying Databases in SQL Azure. Do you have questions, concerns, comments? Post them below and we will try to address them.

Wade Wegner and Zane Adam also chimed in on this topic.


Azret Botash posted End-User Report Designer – Viewing Reports (Part 2) on 8/24/2010 to the DevExpress blog:

In part 1 we created an end-user report designer that can publish reports to a database using the OData protocol. Now, let’s create a Silverlight application that can view the reports that we have published.

Report Service

First we’ll add a printing service to our ASP.NET host application.

Silverlight-enabled XtraReports Service

By default, the Silverlight-enabled XtraReports Service looks up reports by type name; we want to override this behavior and load the report from the database.

protected override XtraReport CreateReport(
    string reportTypeName, 
    Dictionary<string, object> parameters) {
    
    try {
        using (List listService = new List()) {

            using (Session session 
                        = new Session(listService.GetDataLayer())) {

                Report report 
                    = session.GetObjectByKey<Report>(new Guid(reportTypeName));

                if (report == null) {
                    return Create404Report();
                }

                File file = report.File;

                if (file == null) {
                    return Create404Report();
                }

                XtraReport retVal = new XtraReport();

                using (MemoryStream stream = new MemoryStream(file.Binary)) {
                    retVal.LoadLayout(stream);
                    return retVal;
                }

            }
        }
    } catch (Exception e) {
        return Create500Report(e);
    }
    
}

Note: The assumption of the new CreateReport is that the reportTypeName is a report ID, a GUID.

Silverlight Viewer

Inside the Silverlight application, we’ll simply drop the DocumentPreview control on our page and load the report on page load. I have described this process here.

<dxp:DocumentPreview Name="documentPreview1"/>
void MainPage_Loaded(object sender, RoutedEventArgs e) {
    if (!HtmlPage.Document.QueryString.ContainsKey("id")) {
        return;
    }
    
    string reportId = HtmlPage.Document.QueryString["id"];

    if (string.IsNullOrWhiteSpace(reportId)) {
        return;
    }

    ReportPreviewModel model
                   = new ReportPreviewModel(
                        new Uri(App.Current.Host.Source.AbsoluteUri + "../../ReportService.svc").ToString());

    model.ReportTypeName = reportId;

    documentPreview1.Model = model;

    model.CreateDocument();
}

That’s it; we can now access our reports by URL, for example:

http://localhost.:56844/Report.aspx?id=7750954ac4b749a2a41c70f419e741c2

Silverlight Report Viewer

Final Notes
  • You can download the complete sample here.
  • The included Web.config will be useful to you if you need to copy and paste some settings.
  • You will also need DevExpress.Xpo.Services.10.1.dll; this file is not included in the provided sample, but you can download it from http://xpo.codeplex.com.
  • Get more information and more samples of OData Provider for XPO.


Not to be outdone by the #OData enthusiasts, there’s also an #sqlazure Daily newspaper based on Twitter posts with the #sqlazure tag:


What’s strange is that I can’t find an #Azure or #WindowsAzure newspaper.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Vittorio Bertocci (@vibronet) posted Infographic: IPs, Protocols & Token Flavours in the August Labs release of ACS to his personal blog on 8/25/2010:

The newest lab release of ACS shows some serious protocol muscle, covering (to my knowledge) more ground than anything else to date. ACS also does an excellent job in simplifying many scenarios that would traditionally require much more thinking & effort: as a result, it is very tempting to just think that any scenario falling in the Cartesian product of possible IPs, protocols, token types and application types can be easily tackled. Although that is true in principle, in reality there are uses and scenarios that are more natural and easier to implement. Discussions about this, in one form or another, are blossoming all over the place both internally and externally: as a visual person I think that a visual summary of the current situation can help to scope the problem and use the service more effectively. Here’s my first attempt (click for bigger version).

[Infographic: IPs, Protocols & Token Flavours in the August Labs release of ACS]

I am fairly confident that this should be correct; I discussed it with Hervey, Todd and Erin, but there’s always the possibility that I misunderstood something.
There’s quite a lot of stuff in there, so let me walk you through the various parts of the diagram.

The diagram is partitioned into three disjoint vertical regions: on the left are all the identity providers you can use with ACS, on the right the applications that can trust ACS; and between them, there is ACS itself. On the borderline between ACS and your applications are the three issuing endpoints offered by ACS: the WS-Federation endpoint, the WS-Trust endpoint and the OAuth WRAP one. Here I didn’t draw any of the ACS machinery, from the claim transformation engine to the list of RP endpoints; it’s enough to know that something happens to the claims in their journey from the IP to the ACS issuers.

The diagram is also subdivided into 3 horizontal regions, which represent the kind of apps that are best implemented using a given set of identity providers and/or protocols. The WS-Federation issuer is best suited for applications which are meant to be consumed via web browser; WS-Trust, and the OAuth WRAP profiles that ACS implements, are ideal for server-to-server communications; finally, WS-Trust is also suitable for cases in which the user is taking advantage of rich clients. This classification is one of the areas of maximum confusion, and a likely source of controversy. Of course you can use WS-Federation without a browser (that’s what I do in SelfSTS), and of course you can embed WS-Federation in a rich client and use a browser control to obtain tokens; however, those require writing custom code, a very good grasp of what you are doing and the will to stretch things beyond intended usage, hence I am not covering them here.

Let’s backtrack through the diagram starting from the ACS issuer endpoints.

The WS-Federation endpoint is probably the one you are most familiar with; it’s the one you take advantage of in order to sign users in to your application by leveraging multiple identity providers. It’s also the one which allows you a no-code experience for the most common cases, thanks to the WIF SDK’s Add STS Reference wizard.
You can configure that endpoint to issue SAML 1.1, SAML 2.0 and SWT tokens. The latter can be useful for protocol transition scenarios; however, remember that there’s no OOB support for the format.

The sources here are the ones you can see on the portal, and the ones that the ACS-generated home realm discovery page will offer you (if you opted in). Every IP will use its own protocol for authentication (Google and Yahoo use OpenID, Facebook uses Facebook Connect, ADFS2 uses whatever authentication system is active) but in the end your application will get a WS-Federation wresult with a transformed token. It should be noted that “ADFS2” does not strictly indicate an ADFS2 instance; anything that can do WS-Federation should be usable here.

The WS-Trust endpoint will issue tokens when presented with a token from a WS-Trust identity provider, that is to say an ADFS2 instance (or equivalent, per the earlier discussion). It will also issue tokens when invoked with the username and password associated with a service identity, static credentials maintained directly in ACS.

The OAuth WRAP endpoint will issue SWT tokens when invoked with a service identity credential; it will also accept SAML assertions from a trusted WS-Trust IP, pretty much the ADFS2 integration scenario from V1. Note that the profiles supported by ACS are server to server: the username & password of a service identity are not user credentials, but the means through which a service authenticates with another (including cases in which the user does not even have a session in place).
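To make the service-identity path concrete, here is a minimal sketch (not from Vittorio's post) of a server-to-server token request against the OAuth WRAP endpoint. The namespace URL, service identity name, password and relying-party scope are placeholder assumptions; the form-field names follow the WRAP profile that ACS exposes:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Web;

class AcsWrapClientSketch
{
    static void Main()
    {
        // Placeholder: substitute your ACS namespace's WRAP endpoint.
        var wrapEndpoint = "https://yournamespace.accesscontrol.appfabriclabs.com/WRAPv0.9/";

        var form = new NameValueCollection
        {
            { "wrap_name", "yourserviceidentity" },               // service identity configured in ACS
            { "wrap_password", "yourpassword" },                  // its password
            { "wrap_scope", "http://yourservice.example.com/" }   // the relying party's realm
        };

        using (var client = new WebClient())
        {
            // POST the form-encoded credentials; ACS replies with
            // wrap_access_token=...&wrap_access_token_expires_in=...
            byte[] response = client.UploadValues(wrapEndpoint, "POST", form);
            string body = System.Text.Encoding.UTF8.GetString(response);

            // ParseQueryString URL-decodes the SWT for us.
            string swt = HttpUtility.ParseQueryString(body)["wrap_access_token"];

            // The SWT is then passed to the protected service, typically in the Authorization header.
            Console.WriteLine("WRAP access token (SWT): " + swt);
        }
    }
}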

That’s it; that should give you a feeling of the scope of what you can do with this release. I’ll probably add to this as things move forward. Have fun!


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Rinat Abdullin (@abdullin) explained how to Redirect TCP Connections in Windows Azure in this 8/25/2010 post:

I've just published a quick and extremely simple open source project that shows how to redirect TCP connections from one IP address/port combination to another in Windows Azure. It is sometimes helpful, when dealing with SQL Azure, cloud workers, firewalls and the like.

Lokad Tcp Tunnel for Windows Azure | Download

Usage is extremely simple:

  • Get the package.
  • Configure ServiceConfiguration to point to the target IP address/port you want to connect to (you can do this later in Azure Developer's Portal).
  • Upload the Deployment.cspkg with the config to the Azure and start them.
  • Connect to deployment.cloudapp.net:1001 as if it was IP:Port from the config.

If you are connecting to SQL Server this way (hosted in Azure or somewhere else), then the address has to be specified like this in SQL Server Management Studio (note the comma):

deployment.cloudapp.net,1001

Actual Azure Worker config settings should look similar to the ones below when configuring TCP routing toward SQL Server (note port 1433, which is the default for SQL Server):

<ConfigurationSettings>
  <Setting name="Host" value="ip-of-your-SQL-server" />
  <Setting name="Port" value="1433" />
</ConfigurationSettings>

The project relies on rinetd to do the actual routing and demonstrates how to:

  • Bundle non .NET executable in Windows Azure Worker and run it.
  • Deal with service endpoints and pass them to the processes.
  • Use Cloud settings to configure the internal process.

Since core source code is extremely simple, I'll list it here:

// Read the public input endpoint and the target host/port from the role configuration.
var point = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Incoming"];
var host = RoleEnvironment.GetConfigurationSettingValue("Host");
var port = RoleEnvironment.GetConfigurationSettingValue("Port");

// Write a one-line rinetd rule: bind 0.0.0.0 on the endpoint's port and
// forward traffic to the configured host/port.
var tempFileName = Path.GetTempFileName();
var args = string.Format("0.0.0.0 {0} {1} {2}", point.IPEndpoint.Port, host, port);

File.WriteAllText(tempFileName, args);

var process = new Process
  {
    StartInfo =
      {
        UseShellExecute = false,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        CreateNoWindow = true,
        ErrorDialog = false,
        FileName = "rinetd.exe",
        WindowStyle = ProcessWindowStyle.Hidden,
        Arguments = "-c \"" + tempFileName + "\"",
      },
    EnableRaisingEvents = false
  };
process.Start();

process.BeginOutputReadLine();
process.BeginErrorReadLine();
process.WaitForExit();

Tcp Tunnel for Azure is shared by Lokad in hopes that it will save a few hours or a day to somebody.


Dilip Krishnan summarizes David Pallman’s Hidden Costs in the Cloud, Part 1: Driving the Gremlins Out of Your Windows Azure Billing post of 8/14/2010 in an 8/25/2010 post to the InfoQ blog:

In a recent post David Pallman takes a look at the hidden costs of moving to the cloud, specifically in the context of Azure.

Cloud computing has real business benefits that can help the bottom line of most organizations. However, you may have heard about (or directly experienced) cases of sticker shock where actual costs were higher than expectations.

“These costs aren’t really hidden, of course: it’s more that they’re overlooked, misunderstood, or underestimated,” he says, as he examines and identifies these commonly overlooked costs in cloud-based solutions.

Hidden Cost #1: Dimensions of Pricing
According to David, the #1 source of surprises is not taking into account the various dimensions in which the provider's services are metered. Every service utilized adds more facets in which the offering can be metered: bandwidth, storage, transaction costs, service fees, etc.

In effect, everything in the cloud is cheap but every kind of service represents an additional level of charge. To make it worse, as new features and services are added to the platform the number of billing considerations continues to increase.

He suggests consumers use service specific ROI calculators such as the Windows Azure TCO Calculator, Neudesic’s Azure ROI Calculator, or the official Windows Azure pricing information.

Hidden Cost #2: Bandwidth
“Bandwidth is often overlooked or underappreciated in estimating cloud computing charges,” he says. He suggests that we model bandwidth usage using tools such as Fiddler to get a ballpark estimate based on key usage scenarios. One could also throttle bandwidth overages or model the architecture to provide the path of least traffic for any usage scenario.

Hidden Cost #3: Leaving the Faucet Running
He suggests we review usage charges and billing often to avoid surprises at the end of the billing period.

Leaving an application deployed that you forgot about is a surefire way to get a surprising bill. Once you put applications or data into the cloud, they continue to cost you money, month after month, until such time as you remove them. It’s very easy to put something in the cloud and forget about it.

Hidden Cost #4: Compute Charges Are Not Based on Usage

Hidden Cost #6: A Suspended Application is a Billable Application

He emphasizes that even if an application is unused or suspended, billing charges still apply. He urges users to check the billing policies.

Since the general message of cloud computing is consumption-based pricing, some people assume their hourly compute charges are based on how much their application is used. It’s not the case: hourly charges for compute time do not work that way in Windows Azure.

Hidden Cost #5: Staging Costs the Same as Production

Many have mistakenly concluded that only Production is billed for when in fact Production and Staging are both charged for, and at the same rates.

Use Staging as a temporary area and set policies that anything deployed there must be promoted to Production or shut down within a certain amount of time. Give someone the job of checking for forgotten Staging deployments and deleting them—or even better, automate this process.

Hidden Cost #7: Seeing Double

[Y]ou need a minimum of 2 servers per farm if you want the Windows Azure SLA to be upheld, which boils down to 3 9’s of availability. If you’re not aware of this, your estimates of hosting costs could be off by 100%!

Hidden Cost #8: Polling
Polling data in the cloud is a costly activity and incurs transaction fees; depending on how frequently you poll, the costs can add up quickly. He suggests:

Either find an alternative to polling, or do your polling in a way that is cost-efficient. There is an efficient way to implement polling using an algorithm that varies the sleep time between polls based on whether any data has been seen recently.
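The variable-sleep approach Pallman describes can be sketched in a few lines of C#. The loop below is an illustrative assumption (the names and intervals are made up, and it is not code from his post): it doubles the sleep interval while the work source stays empty and snaps back to the minimum as soon as work appears, which cuts the number of billable storage transactions during quiet periods.

using System;
using System.Threading;

class BackOffPoller
{
    // tryProcessOneItem should return true when it found and handled work.
    static void PollWithBackOff(Func<bool> tryProcessOneItem)
    {
        TimeSpan minDelay = TimeSpan.FromSeconds(1);
        TimeSpan maxDelay = TimeSpan.FromMinutes(5);
        TimeSpan delay = minDelay;

        while (true)
        {
            if (tryProcessOneItem())
            {
                // Work was found: poll aggressively again.
                delay = minDelay;
            }
            else
            {
                // Nothing there: wait, then double the delay up to the cap.
                Thread.Sleep(delay);
                delay = TimeSpan.FromSeconds(
                    Math.Min(maxDelay.TotalSeconds, delay.TotalSeconds * 2));
            }
        }
    }
}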

Hidden Cost #9: Unwanted Traffic and Denial of Service Attacks

He warns that unintended traffic in the form of DoS attacks, spiders, etc. could increase traffic in unexpected ways. He suggests the best way to deal with such unintended charges is to audit the security of the application and put controls such as CAPTCHAs in place.

If your application is hosted in the cloud, you may find it is being accessed by more than your intended user base. That can include curious or accidental web users, search engine spiders, and openly hostile denial of service attacks by hackers or competitors. What happens to your bandwidth charges if your web site or storage assets are being constantly accessed by a bot?

Hidden Cost #10: Management
Finally, he states that as a result of all of these factors, the cloud comes with an inherent cost of managing these services for efficient usage and, consequently, billing.

Regularly monitor the health of your applications. Regularly monitor your billing. Regularly review whether what’s in the cloud still needs to be in the cloud. Regularly monitor the amount of load on your applications. Adjust the size of your deployments to match load.

The cloud’s marvelous IT dollar efficiency is based on adjusting deployment larger or smaller to fit demand. This only works if you regularly perform monitoring and adjustment.

He concludes his article saying that it's very possible one might not get things right the first time around and to expect some experimentation; possibly get an assessment of the infrastructure and seek guidance from experts.

Cloud computing is too valuable to pass by and too important to remain a diamond in the rough.

Be sure to check out the original post and do enrich the comments section with your experiences.

Also, check out David’s latest: Hidden Costs in the Cloud, Part 2: Windows Azure Bandwidth Charges of 8/21/2010.


Brent Stineman (@BrentCodeMonkey) finally posted his Windows Azure Diagnostics Part 2–Options, Options, Options analysis on 8/24/2010:

It’s a hot and muggy Sunday here in Minnesota. So I’m sitting inside, writing this update while my wife and kids both get their bags ready for going back to school. It’s hard to believe that summer is almost over already. Heck, I’ve barely moved my ‘68 Cutlass convertible this year. But enough about my social agenda.

After 4 months I’m finally getting back to my WAD series. Sorry for the delay, folks. It hasn’t been far from my mind since I did part 1 back in April. But I’m back with a post that I hope you’ll enjoy, and I’ve taken some of the time to do testing, digging past the surface in hopes of bringing you something new.

Diagnostic Buffers

If you’ve read up on WAD at all, you’ve probably read that there are several diagnostic data sources that are collected by default. Something that’s not made real clear in the MSDN articles (and even in many other blogs and articles I read preparing for this) is that this information is NOT automatically persisted to Azure Storage.

So what’s happening is that these data sources are buffers that represent files stored in the local file system. The size of these buffers is governed by a property of the DiagnosticMonitorConfiguration settings, OverallQuotaInMB. This setting represents the total space on the VM that will be used for the storage of all log file information. You can also set quotas for the various individual buffers, the sum total of which should be no greater than the overall quota.

These buffers will continue to grow until their maximum quota is reached, at which time the older entries will be aged off. Additionally, should your VM crash, you will likely lose any buffer information. So the important step is to make sure you have each of your buffers configured properly to persist the logs to Azure Storage in such a way that helps protect the information you are most interested in.

When running in the development fabric, you can actually see these buffers. Launch the development fabric UI and navigate to a role instance and right click it as seen below:

[Screenshot: the development fabric UI with a role instance’s right-click menu]

Poke around in there a bit and you’ll find the various file buffers I’ll be discussing later in this update.

If you’re curious about why this information isn’t automatically persisted, I’ve been told it was a conscious decision on the part of the Azure team. If all these sources were automatically persisted, the potential costs associated with Azure Storage could present an issue. So they erred on the side of caution.

Ok, with that said, it’s time to move on to configuring the individual data sources.

Windows Azure Diagnostic Infrastructure Logs

Simply put, this data source is the chatter from the WAD processes, the role, and the Azure fabric. You can see it start up, configuration values being loaded and changed, etc. This log is collected by default but, as we just mentioned, not persisted automatically. Like most data sources, configuring it is pretty straightforward. We start by grabbing the current diagnostic configuration in whatever manner suits you best (I covered a couple of ways last time), giving us an instance of DiagnosticMonitorConfiguration that we can work with.

To adjust the WAD data source, we’re going to work with the DiagnosticInfrastructureLogs property, which is of type BasicLogsBufferConfiguration. This allows us to adjust the following values:

BufferQuotaInMB – maximum size of this data source’s buffer

ScheduledTransferLogLevelFilter – this is the LogLevel threshold that is used to filter entries when entries are persisted to Azure storage.

ScheduledTransferPeriod – this TimeSpan value is the interval at which the log should be persisted to Azure Storage.
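Pulling those three properties together, a minimal configuration sketch might look like the following. The quota, filter level, transfer interval and connection string name are illustrative assumptions, and the snippet assumes the Microsoft.WindowsAzure.Diagnostics namespace:

// Grab the default configuration, adjust the infrastructure-log buffer,
// then start the diagnostic monitor so the changes take effect.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

config.DiagnosticInfrastructureLogs.BufferQuotaInMB = 256;
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

DiagnosticMonitor.Start("DiagnosticsConnectionString", config);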

Admittedly, this isn’t a log you’re going to have much call for very often, if ever. But I have to admit, when I looked at it, it was kind of interesting to see more about what was going on under the covers when roles start up.

Windows Azure Logs

The next source that’s collected automatically is Azure trace listener messages. This data source is different from the previous one because it only contains what you put into it. Since it’s based on trace listeners, you have to instrument your application to take advantage of it. Proper instrumentation of any cloud-hosted application is something I consider a best practice.

Tracing is a topic so huge that considerable time can (and has) been expended to discuss it. You have switches, levels, etc. So rather than diving into that extensive topic, let me just link you to another source that does it exceedingly well.

However, I do want to touch on how to get this buffer into Azure Storage. Using the Logs property of DiagnosticMonitorConfiguration we again access an instance of the BasicLogsBufferConfiguration class, just like the Azure Diagnostics infrastructure logs, so the same properties are available. Set them as appropriate, save your configuration, and we’re good to go.
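For example, a role might emit a trace message and schedule the buffer for transfer like this. The filter level, interval and connection string name are assumptions for illustration, and the snippet assumes the System.Diagnostics, Microsoft.WindowsAzure.Diagnostics and Microsoft.WindowsAzure.ServiceRuntime namespaces:

// Instrument the application: this message lands in the Windows Azure Logs buffer.
Trace.TraceInformation("Worker role starting; instance {0}",
    RoleEnvironment.CurrentRoleInstance.Id);

// Persist the trace buffer to table storage on a schedule.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);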

IIS Logs (web roles only)

The last data source that is collected by default, at least for web roles, is the IIS logs. These are a bit of an odd bird in that there’s no way to schedule a transfer or set a quota for them. I’m also not sure if their size counts against the overall quota. What is known is that if you do an on-demand transfer for ‘Logs’, this buffer will be copied to blob storage for you.

FREB Logs

Our next buffer, the Failed Request Event Buffering (FREB) log, is closely related to the IIS logs. It contains, of course, the failed IIS requests. This web-role-only data source is configured by modifying the web.config file of your role, introducing the following section.

[Screenshot: the web.config section that enables failed-request tracing; not reproduced here]

Unfortunately, my tests for how to extract these logs haven’t yet been completed as I write this. But as soon as I do, I’ll update this post with that information. But for the moment, my assumption is that once configured, an on-demand transfer will pull them in along with the IIS Logs.

Crash Dumps

Crash dumps, like the FREB logs, aren’t automatically collected or persisted. Again, I believe that doing an on-demand transfer will copy them to storage, but I’m still trying to prove it. Configuring the capture of this data also requires a different step. Fortunately, it’s the easiest of all the logs in that it’s simply an on/off switch that doesn’t even require a reference to the current diagnostic configuration. As follows:

Microsoft.WindowsAzure.Diagnostics.CrashDumps.EnableCollection(true);

Windows Event Logs

Do I really need to say anything about these? Actually yes, namely that the security log… forget about it. Adding custom event types/categories? Not an option. However, what we can do is gather from the other logs through a simple XPath statement as follows:

diagConfig.WindowsEventLog.DataSources.Add("System!*");

In addition to this, you can also filter the severity level.
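As a quick illustration (the channel, filter level, interval and connection string name below are assumptions rather than values from Brent's post), the event-log data source is wired up the same way as the other buffers:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Capture everything from the Application channel; "System!*" works the same way.
config.WindowsEventLog.DataSources.Add("Application!*");

// Only persist warnings and above, every five minutes.
config.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Warning;
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

DiagnosticMonitor.Start("DiagnosticsConnectionString", config);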

Of course, the real challenge is formatting the XPath. Fortunately, the king of Azure evangelists, Steve Marx, has a blog post that helps us out. At this point I’d probably go on to discuss how to gather these, but you know… Steve already does that. And it would be pretty presumptuous of me to think I know better than the almighty “SMARX”. Alright, enough sucking up… I see the barometer is dropping. So let’s move on.

Performance Counters

We’re almost there. Now we’re down to performance counters, a topic most of us are likely familiar with. The catch is that as developers, you likely haven’t done much more than hear someone complain about them. Performance counters belong in the world of the infrastructure monitoring types. You know, those folks that sit behind closed doors with the projector aimed at a wall with scrolling graphs and numbers? If things start to go badly, a mysterious email shows up in the inbox of a business sponsor warning that a transaction took 10ms longer than it was supposed to. And the next thing you know, you’re called into an emergency meeting to find out what’s gone wrong.

Well guess what: mysterious switches in the server are no longer responsible for controlling these values. Now we can control them via WAD as follows:

[Screenshot of the performance counter configuration code; not reproduced here]

We create a new PerformanceCounterConfiguration, specify what we’re monitoring, and set a sample rate. Finally, we add that to the diagnostic configuration’s PerformanceCounters data sources and set the TimeSpan for the scheduled transfer. Be careful when adding, though, because if you add the same counter twice, you’ll get twice the data. So check to see if it already exists before adding it.

Something important to note here: my example WON’T WORK as shown, because as of the release of Azure Guest OS 1.2 (April 2010), we need to use the specific versions of the performance counters or we won’t necessarily get results. So before you go trying this, get the right strings for the CounterSpecifier.
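For reference, a minimal sketch of that kind of configuration follows. The counter specifier, sample rate and transfer period are illustrative assumptions (per the warning above, verify the exact counter strings for the Guest OS version you target), and the snippet assumes the System.Linq and Microsoft.WindowsAzure.Diagnostics namespaces:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

var cpuCounter = new PerformanceCounterConfiguration
{
    // Verify this specifier against your Guest OS version before relying on it.
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(30)
};

// Avoid duplicates: adding the same counter twice doubles the collected data.
bool alreadyAdded = config.PerformanceCounters.DataSources
    .Any(c => c.CounterSpecifier == cpuCounter.CounterSpecifier);
if (!alreadyAdded)
{
    config.PerformanceCounters.DataSources.Add(cpuCounter);
}

config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);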

Custom Error Logs

*sigh* Finally! We’re at the end. But not so fast! I’ve actually saved the best for last. How many of you have applications you may be considering moving to Azure? These likely already have complex file-based logging in them and you’d rather not have to re-instrument them. Maybe you’re using a worker role to deploy an Apache instance and want to make sure you capture its non-Azure logs. Perhaps it’s just a matter of your having an application that captures data from another source and saves it to a file, and you want a simple way to save those values into Azure storage without having to write more code.

[Screenshot of the custom log (DirectoryConfiguration) code; not reproduced here]

Yeah! You have an option through WAD’s support for custom logs. They call them logs, but I don’t want you to think like that. Think of this option as your escape clause for any time there’s a file in the VM’s local file store that you want to capture and save to Azure Storage! And yes, I speak from experience here. I LOVE this option. It’s my catch-all. The code snippet in the screenshot shows how to configure a data source to capture a file. In that snippet, “AzureStorageContainerName” refers to a blob container in Azure Storage that these files will be copied to. LogFilePath is of course where the file(s) I want to save are located.

Then we add it to the diagnostic configuration’s Directories data sources. So simple yet flexible! All that remains is to set a ScheduledTransferPeriod or do an on-demand transfer. A sketch of the whole configuration follows.
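Since the original code screenshot isn't reproduced here, the following is a reconstruction of that kind of configuration under stated assumptions: the container name, local-resource name, quota and transfer interval are placeholders, not values from Brent's project, and the snippet assumes the Microsoft.WindowsAzure.Diagnostics and Microsoft.WindowsAzure.ServiceRuntime namespaces:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

var customLogs = new DirectoryConfiguration
{
    // Blob container the captured files are copied to (placeholder name).
    Container = "custom-log-files",
    // Local folder being watched; assumes a local storage resource named "CustomLogs".
    Path = RoleEnvironment.GetLocalResource("CustomLogs").RootPath,
    DirectoryQuotaInMB = 128
};

config.Directories.DataSources.Add(customLogs);
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

DiagnosticMonitor.Start("DiagnosticsConnectionString", config);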

Yes, I’m done

Ok, I think that does it. This went on far longer than I had originally intended. I guess I just had more to say than I expected. My only regret here is that just when I’m getting some momentum going on this blog again… I’m going to have to take some time away. I’ve got another Azure-related project that needs my attention and is unfortunately under NDA.

Once that is finished, I need to dive into preparing several presentations I’m giving in October concerning the Azure AppFabric. If I’m lucky, I’ll have time to share what I learn as I work on those presentations. Until then… stay thirsty my friends.


<Return to section navigation list> 

VisualStudio LightSwitch

Tim Anderson (@timanderson) asked Visual Studio LightSwitch – model-driven architecture for the mainstream? on 8/25/2010:

I had a chat with Jay Schmelzer and Doug Seven from the Visual Studio LightSwitch team. I asked about the release date – no news yet.

What else? Well, Schmelzer and Seven had read my earlier blog post so we discussed some of the things I speculated about. Windows Phone 7? Won’t be in the first release, they said, but maybe later.

What about generating other application types from the same model? Doug Seven comments:

The way we’ve architected LightSwitch does not preclude us from making changes .. it’s not currently on the plan to have different output formats, but if demand were high it’s feasible in the future.

I find this interesting, particularly given that the future of the business client is not clear right now. The popularity of Apple’s iPad and iPhone is a real and increasing deployment problem, for example. No Flash, no Silverlight, no Java, only HTML or native apps. The idea of simply selecting a different output format is compelling, especially when you put it together with the fast JIT-compiled JavaScript in modern web browsers. Of course support for multiple targets has long been the goal of model-driven architecture (remember PIM, PSM and PDM?); but in practice the concept of a cross-platform runtime has proved more workable.

There’s no sign of this in the product yet though, so it is idle speculation. There is another possible approach though, which is to build a LightSwitch application, and then build an alternative client, say in ASP.NET, that uses the same WCF RIA Services. Since Visual Studio is extensible, it will be fun to see if add-ins appear that exploit these possibilities.

I also asked about Mac support. It was as I expected – the team is firmly Windows-centric, despite Silverlight’s cross-platform capability. Schmelzer was under the impression that Silverlight on a Mac only works within the browser, though he added “I could be wrong”.

In fact, Silverlight out of browser already works on a Mac; the piece that doesn’t work is COM interop, which is not essential to LightSwitch other than for export to Excel. It should not be difficult to run a LightSwitch app out-of-browser on a Mac, just right-click a browser-hosted app and choose Install onto this computer, but Microsoft is marketing it as a tool for Windows desktop apps, or Web apps for any other client where Silverlight runs.

Finally I asked whether the making of LightSwitch had influenced the features of Silverlight or WCF RIA Services themselves. Apparently it did:

There are quite a few aspects of both Silverlight 4 and RIA Services that are in those products because we were building on them. We uncovered things that we needed to make it easier to build a business application with those technologies. We put quite a few changes into the Silverlight data grid.

said Schmelzer, who also mentioned performance optimizations for WCF RIA Services, especially with larger data sets, some of which will come in a future service pack. I think this is encouraging for those intending to use Silverlight for business applications.

There are many facets to LightSwitch. As a new low-end edition of Visual Studio it is not that interesting. As an effort to establish Silverlight as a business application platform, it may be significant. As an attempt to bring model-driven architecture to the mainstream, it is fascinating.

The caveat (and it is a big one) is that Microsoft’s track-record on modelling in Visual Studio is to embrace in one release and extinguish in the next. The company’s track-record on cross-platform is even worse. On balance it is unlikely that LightSwitch will fulfill its potential; but you never know.


Paul Peterson posted Microsoft LightSwitch – Customizing a Data Entity using the Designer on 8/24/2010:

In my last post I created a really simple application that I can use to keep track of some simple customer information. I created a simple entity named Customers, and then created a couple of screens that allow me to manage the entities I create. What I’ll do next is take a closer look at this Customers entity, and see what can be done to it to better meet my needs.

This post is specifically about the “entity”, not the fields within the entity (yet). I want to first understand what I can do with the designer in the management of a single entity. Things like creating and editing the fields and their properties, as well as entity relationships, are all subjects for later posts. I don’t have enough time during my lunch hour to write an all-inclusive post about that stuff, so I am going to create smaller, bite-sized tidbits of information that you can print and read during your library break at work.

The Entity

My Customers entity contains some pretty simple information. Not really much there, and probably not nearly enough for what I want to eventually accomplish. Nonetheless, it’s a start and enough to get me going and start exploring.

Here is a look at my Customers entity in LightSwitch

The Customers Table

The Customers Entity

Since my last post I’ve had some time to think about my Customers entity. I was thinking that, hey, this entity I created is really supposed to represent a business entity, not a bunch of entities. As such, it is probably a better idea to name my entity in a way that represents a single entity instead of many. So I rename my entity to just Customer.

One of the first things I notice is how LightSwitch automagically saw that I changed the name of my entity. In the Properties window of my (now renamed) Customer entity there is a property labeled Plural Name. Before I renamed the entity, this property had a value of “CustomersSet”. When I changed the entity name to just “Customer”, LightSwitch saw this and automatically pluralized the name of the entity.

When it comes to naming your entities (tables) it is a good idea to follow a consistent naming convention for all entities. Everyone likes to use a different convention when naming anything in an application; it’s really a matter of personal choice. Personally, I like to name a table as if it is a single entity. Others like to name it in its plural form. Some conventions even include one, or a combination of, different casings (upper and/or lower case characters). Whatever you think is going to work for you is ultimately the “right” way, so don’t think that you have to do exactly what someone else says is the “right” way – like when I BBQ!

Just for kicks, I click the Start Debugging button. I want to see if changing the name of entity had any cascading effects to my application.

Cool! The application fired up just fine and let me do the same add and edit customers without barfing.

Properties

Looking further at the Properties window for my Customer entity, I see some other properties I can try messing with.

Default Screen

In the earlier post I created a screen showing a list of customers, and a screen that allows me to edit customers. These are not to be confused with Search Data or Details screens. Details screens are screens that I can create to use instead of the default detail screens that LightSwitch might present. If I had created a Details screen for my Customer entity, the Default Screen dropdown would contain an item for the detail screen I created. For example, if I double-clicked a record in my CustomerList screen, and I had created a Details screen and selected it from the detail screen drop-down box, that screen would open instead of the default LightSwitch detail edit screen. For now, I am not going to create a default screen. I’ll let LightSwitch present one on the fly as I need it.

Plural Name

As mentioned before, the Plural Name property is the pluralized representation of my entity’s name. The plural name is editable, and can be customized to whatever you want. Just remember to use something that will make sense to you. Oh, and it can’t contain any spaces or special characters.

Is Searchable

The Is Searchable property tells LightSwitch whether or not the data will be searchable from a screen. This applies to both the list screens, and the search screens that LightSwitch creates. You can still create a screen from the Search Data Screen template, however the search box will only be visible if the Is Searchable property is selected.

For example, here is my CustomerList screen with the Is Searchable property set to true on the Customer entity…

Is Searchable property of entity set to TRUE


… and the same screen where the search box is not there because the Is Searchable property is not checked…

Is Searchable property of entity set to FALSE


Name

This is what I talked about before. This is the name of the business entity.

Description

This is a description of the entity. It may help you better understand the purpose of the entity should you end up with a whole lot of them. I am not really sure where else in LightSwitch this property value is used.

Display Name

The display name is what will show up at the top of some of the screens that are created for your entity. The default display name is the same value you first entered as the entity name. If you like, you can give it a different name that may make more sense than what was used for the entity name.

Summary Property

An entity might be made up of a lot of different fields. There is likely a field that could be used as a summary for the entity. In my case, the CustomerName is going to be the Summary Property. If not otherwise defined, the value of the field defined as the summary property will show up on the screen where there is a list involved. Running my application I see that the value of the CustomerName field appears in the list on the CustomerList screen.

Conclusion

That’s it. Not that difficult to understand is it?

In my next post I am going to start playing with the properties of the fields themselves. This is where I am going to do some fun stuff like creating a choice list for one of my fields.


Martin Heller asserted “Beta 1 of Microsoft's LightSwitch shows promise as an easy-to-use development tool, but it doesn't seem to know its audience” in a preface to his InfoWorld preview: Visual Studio LightSwitch chases app dev Holy Grail post of 8/23/2010 to InfoWorld’s Developer-World blog:

image One of the Holy Grails of application development has been to allow a businessperson to build his or her own application without needing a professional programmer. Over the years, numerous attempts at this goal have achieved varied levels of success. A few have survived; most have sunk into oblivion.

Microsoft's latest attempt at this is Visual Studio LightSwitch, now in its first beta test. LightSwitch uses several technologies to generate applications that connect with databases. It can run on a desktop or in a Web browser, and it can use up to three application layers: client tier, middle tier, and data access.

The technologies used are quite sophisticated. Silverlight 4.0 is a rich Internet application (RIA) environment that can display screens in a Web browser or on a desktop, and it hosts a subset of .Net Framework 4.0. WCF (Windows Communication Foundation) RIA services allow Silverlight applications to communicate. An entity-relationship model controls the data services. (See the LightSwitch architecture diagram below.)

LightSwitch screens run in three layers of objects. A screen object encapsulates the data and business logic. A screen layout defines the logical layout of objects on the screen. And a visual tree contains physical Silverlight controls bound to the logical objects in the layout.

In a conventional Silverlight application -- or in almost any conventional application built in Visual Studio -- the user works directly with the controls and layout and writes or generates a file that defines the visual layout and data bindings. For Silverlight and WPF (Windows Presentation Foundation), that file is in XAML. The Silverlight and WPF designers in Visual Studio 2010 offer two synchronized panes of XAML code and visuals.

LightSwitch can create two-tier and three-tier desktop and Web applications against SQL Server and other data sources using Silverlight 4 and WCF RIA Services.

Read more: the article continues on pages 2 and 3 at InfoWorld.

Martin concluded page 2 of his post with the following observation:

The database capabilities of LightSwitch are impressive. An ad-hoc table designer generates SQL Server tables. Existing SQL Server databases can be imported selectively, with relations intact. Relations can be added to imported tables. Entities can be imposed on existing fields; for example, a text field holding a phone number can be treated as a Phone Number entity, which supplies a runtime editor that knows about country codes and area codes.

But in the database area, too, LightSwitch doesn't quite deliver. While it can map an integer field to a fixed pop-up list of meanings, it can't yet follow a relation and map the integer foreign key to the contents of the related table. In other words, I can tell LightSwitch that 0 is New York, 1 is London, and 2 is Tokyo, but I can't point it at a database table that lists the cities of the world.

I’m not certain how valuable an asset “a database table that lists the cities of the world” would be, but I’m certain that it’s possible to find a database that contains city and country information for three-letter airport codes and use it with LightSwitch. (LightSwitch has a screen for defining relationships between tables in multiple databases.) Mapping.com sells a database of 36,606 locations (459 IATA/FAA codes represent two locations) for $50. I’m trying to find a free one; otherwise, I’ll import and munge Wikipedia’s Airport Code list with Microsoft Excel or Access. An alternative is a list of US cities by ZIP code from the US Postal Service. Stay tuned for developments.


<Return to section navigation list> 

Windows Azure Infrastructure

Wilson Pais claimed Microsoft Invest[ing] $500 million on State of the Art Data Center in Brazil as reported in an 8/25/2010 post to the Near Shore Americas blog:

Total investments in Microsoft’s (Nasdaq: MSFT) Brazil data center will reach US$500mn, the director of technology and innovation for Microsoft’s Chile division, Wilson Pais, told BNamericas.

Construction is well underway, Pais said, but was unable to disclose the center’s exact location. The facility will be up and running next year.

The executive also confirmed Microsoft’s intentions of constructing additional data centers in the region, but said the exact locations are still up in the air.

“There will be more than two in Latin America,” he said. “All the data centers are connected, and they all have Microsoft cloud computing infrastructure. Each data center represents an investment of roughly US$500mn. These are data centers the size of soccer stadiums.”

Pais emphasized that the company will make the final call regarding additional centers once demand reaches a certain level. Factors under analysis also include the quality of local network connectivity and country stability, while politics are being left by the wayside.

“If the consumption in Latin America turns out as we expect it will be, we will obviously need another data center in a short period of time,” he said.

BNamericas previously reported that Brazil and Mexico had garnered the most attention in Microsoft’s datacenter planning. Microsoft will need the centers to support its regional cloud computing offer, which already includes a range of enterprise products, from the recently launched Microsoft online services and Windows Azure to Windows Intune and Microsoft Dynamics CRM online.

Microsoft’s cloud services are now available in Brazil, Mexico, Chile, Colombia, Peru, Puerto Rico, Costa Rica and Trinidad & Tobago.

The company has more than 10 data centers worldwide, including three to four facilities in the US, three in Asia and another three to four in Europe, according to Pais.

I wonder when the new data center will start supporting the Windows Azure Platform.


Stuart J. Johnston reported Microsoft Cops to Cloud Computing Platform Outage to the Datamation blog on 8/25/2010:

imageMicrosoft is apologizing to customers for an outage that kept some of its cloud computing users from being able to access their enterprise applications for more than two hours on Monday.

image "On Aug. 23, from 5:30 a.m. [to] 7:45 a.m. PDT, some customers in North America experienced intermittent access to our datacenter. The outage was caused by a network issue that is now fully resolved, and service has returned to normal," a Microsoft (NASDAQ: MSFT) spokesperson said in an email to InternetNews.com.

Around 7 a.m. PDT on Monday, Microsoft sent out an Online Services Notification alert that said it was looking into "a performance issue which may impact connectivity to the North American data center." A second notification announced at around 8:45 a.m. that service had been restored to affected users.

"During the duration of the issue, customers were updated regularly via our normal communication channels. We sincerely apologize to our customers for any inconvenience this incident may have caused them," the Microsoft spokesperson added. The spokesperson declined to say where the affected datacenter is located.

Microsoft, like practically every major enterprise software vendor, has been preaching the benefits of the cloud. Last month, Microsoft touted its successful addition of cloud-based apps and environments to its better-known lineup of deployed software, such as Office and Windows. It also trumpeted client wins like Dow Chemical and Hyatt Hotels & Resorts, which it signed to deals for Microsoft's Business Productivity Online Suite (BPOS), a set of enterprise applications delivered via Microsoft's cloud infrastructure. The suite provides users with hosted versions of Exchange Online, SharePoint Online, Office Live Meeting, Exchange Hosted Services, and Office Communications Online.

As it turns out, however, Monday's outage impacted users of BPOS, which has emerged as one of Microsoft's most popular cloud services.

Also impacted, according to the notification alert, were Microsoft's Online Services Administration Center, Sign In Application, My Company Portal, and Customer Portal.

Microsoft has not said how many users or customer companies were impacted by the outage.

However, Microsoft Server and Tools Division President Bob Muglia said in June at Microsoft's TechEd conference that it has signed up some "40 million paid users of Microsoft Online Services across 9000 business customers and more than 500 government entities."

The news also highlights one of the chief worries discouraging enterprise IT executives from shifting their infrastructures to the cloud: complete reliance on a third party to ensure application availability. Along with the periodic bouts of downtime suffered by Microsoft, cloud players including Amazon, Google and a slew of other major and up-and-coming providers have experienced brief outages that took their cloud and software-as-a-service offerings offline for hours.

Stuart J. Johnston is a contributing writer at InternetNews.com.

According to mon.itor.us, my OakLeaf Systems Azure Table Services Sample Project, which runs as a single instance in the Southwest-US (San Antonio) data center, has had no downtime for the last three weeks (8/1 to 8/22/2010). Watch for my Uptime Report during the first week of September.

John P. Alioto continued his Categorizing the Cloud Azure series by asking in Part 2: Where does my business fit?:

image Last time we discussed various categorizations of the cloud and determined that there were three dimensions against which we could categorize a Cloud offering.  Those dimensions are Service Model, Deployment Model and Isolation Model.  Given that the cardinality of the dimensions is 3, 3, 2, we have 18 possible categorizations.  To answer the question “where does my business fit?” we can examine each of the 18 possible categorizations, the nature of the businesses that fall into each category and the Cloud offerings that one might find there.
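To make those 18 combinations concrete, here is a minimal Python sketch (purely illustrative) that enumerates the cross product of the dimension values Alioto uses in his category labels below:

from itertools import product

# Dimension values as they appear in the category labels below
# (Isolation/Deployment/Service, e.g. "Multi-Tenant/Public/SaaS").
isolation_models = ["Multi-Tenant", "Dedicated"]
deployment_models = ["Public", "Private", "Hybrid"]
service_models = ["SaaS", "PaaS", "IaaS"]

categories = ["/".join(combo) for combo in
              product(isolation_models, deployment_models, service_models)]

print(len(categories))  # 18 = 2 x 3 x 3
for category in categories:
    print(category)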

image For each category, I’ve decided on a three-way breakdown:  the category, an example technology from Microsoft (and an example technology from Others as available or, more precisely, as I understand them) and a description/analysis of the situation in each category.

NB: The technology lists are not intended to be comprehensive, but examples of each.  Also please note, as the disclaimer says, that this is my opinion and may differ from the opinion of others or my Employer.

Before the breakdown (hopefully in my case), let’s look at a few interesting themes that will come up throughout these discussions.  The first theme is the trade-off between Control and Economy of Scale.  The second theme is defining when our requirements are met by a particular Cloud offering.  And the third is the old standby “do more with less.”  None of these three themes is new -- the technology industry has been steeped in them for years and each is critical in making the best decision when it comes to Cloud offerings (or any other system selection procedure).  Strangely, however, when these three themes are applied to the Cloud, the decisions can become confusing and appear new.  They are not.

Not New Theme #1: Control vs. EoS

It’s very interesting that this is sometimes presented as a new issue as it pertains to the Cloud.  I don’t see it that way.  Who among us is not familiar with this trade-off …

image

We know it, we love it, we work within its constraints all the time.  When we first approach a project, we can have a very high degree of control by building things ourselves or we can give up some of that control and buy an off-the-shelf solution.  Sometimes the decision is obvious (few, if any, of us would consider building an OS or DBMS from scratch, for example) whereas sometimes it’s not.

How do we go about making this decision in the situations when the right move is less than perfectly clear?  We follow a fairly tried-and-true procedure starting with a Requirements Matrix.  The Requirements Matrix consists of a list of functionality that we want the system to potentially have.

We then categorize the requirements by criticality (must have, should have, nice to have) and call this our Business Requirements Document (BRD).  We do a vendor/product/solution/service analysis (sometimes including an RFP and PoC) and at the end of all that, we decide which solution most meets our requirements.  (You will see I did not say meets most of our requirements.)  If a solution/product is adequate, we buy it; if none is adequate, we build it.  At least this is how it works in the healthy state.  (I’ve seen companies decide to build solutions themselves that I would never have.)
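As a minimal sketch of that scoring step (the requirements, criticality weights and vendor scores below are entirely hypothetical), the Requirements Matrix might be evaluated like this in Python:

# Hypothetical criticality weights for the Requirements Matrix.
WEIGHTS = {"must have": 3, "should have": 2, "nice to have": 1}

# (requirement, criticality) pairs from the BRD -- placeholders only.
requirements = [
    ("99.9% availability SLA", "must have"),
    ("Single sign-on integration", "should have"),
    ("Custom reporting dashboards", "nice to have"),
]

# 0-5 scores per option from the vendor/product/solution analysis (RFP, PoC).
scores = {
    "Cloud offering A": [5, 4, 2],
    "Build in-house": [4, 5, 5],
}

def weighted_total(option_scores):
    return sum(WEIGHTS[criticality] * score
               for (_, criticality), score in zip(requirements, option_scores))

for option, option_scores in scores.items():
    print(option, "->", weighted_total(option_scores))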

When applied to the Cloud, this discussion can somehow seem new.  But why is this tradeoff …

image

… any different than the last?  Why should we treat it any differently?

The answer is, we shouldn’t.  It’s the very same trade-off we were making with Buy vs. Build with the independent variable being On Premises versus Cloud and the dependent variable remaining Control vs. EoS.  The good news is that we can use the same process we used before to determine whether or not there is a product or solution out there that is adequate for our needs.  In fact, it is not an entirely new process, but instead fits nicely into our vendor/product/solution selection process.

Indeed, I will argue that it should be our default starting position going into selection.  If there is a Cloud solution/product that meets our needs, we should choose it.  Only in the case where there is no adequate Cloud offering should we consider hosting a solution ourselves.

Not New Theme #2: The First Love is Formative (or What is “Good Enough”?)

It is critical that solution adoption decisions for Cloud technologies are correct the first time.  The First Love, opening gambit, right off the bat, first pig out of the chute (insert your metaphor of choice here) experience for businesses in the Cloud has to be positive.  Because of where we sit as an industry in a Diffusion of Technology sense, a bad Cloud decision risks putting a business off the Cloud completely for a good long time.  We need the first Cloud love to be an experience that will create a cloud consumer for life.

Public enemy number one here is ignorance.  Do not go down a “good enough” road in ignorance.  Because I work for MS, I hear the question phrased this way all too often: “what is BPOS?”  We must arm ourselves with information before we make Cloud decisions or advise our customers on same.  We must be aware of the breadth and depth of Cloud offerings in order to make the correct decision on requirements being met and the correct amount of control to be ceded.

The Cloud is good news for businesses.  Let’s keep it that way.  We can all make our jobs easier by not setting ourselves up for a battle every time we go into a customer to talk about Cloud.

Not New Theme #3: The Goal

The Goal is three-fold.  If you’ve been a CIO like me, then you know the Goal by heart.  It’s the CIO’s mantra – say it with me!

  1. Control Costs!
  2. Increase Productivity!
  3. More Innovation!

Some might consider it the CIO’s curse because all three can be wrapped up into one devilishly simple phrase:  “Do more with less!”  That’s a CIO’s job.  That’s your CIO’s job.  (Short digression here.  If you tend to be puzzled by executive decisions, look at them anew in that light and see if they still puzzle you.  You might still disagree, but at least the confusion might go away.)  Whether times are tough or times are booming, the pressure to do more with less never goes away.

If you’ve not had the joy of owning a budget, think of Theme #3 like this:  If you can pull costs out of operations, you can take that money and invest it back into your people.  That investment should increase innovation.  The increase in innovation should lower costs even further and the cycle continues.  But do not make the novice mistake of equating the three-fold Goal with a “cheaper is better” attitude.  You must always look at feasibility of solution versus Total Cost of Ownership.  Always remember, cheaper does not imply lower TCO.

Call it what you want, the CIO’s mantra, the fundamental theorem of IT, whatever, Theme #3 is the Goal.

The Categories:  Where does my Business Fit?

With those Themes in mind, let’s break down the categories of the Cloud and take a look at some of the offerings in each category.  I will combine categories as it makes sense to do so.  It’s not really important to break out each of the 18 categories and have a definitive product/offering in each. It’s adequate to determine where a business currently is and then look in general at what direction that business will take moving to the Cloud.  As I mentioned last time, it’s important to consider the correct category or set of categories of the Cloud that are relevant for the business that you are (or that you are talking to) lest confusion ensue and clarity fade.

SaaS

Category: Multi-Tenant/Public/SaaS
MS Examples:  Bing, BPOS-S, Dynamics CRM Online, Windows Live (Office Web Apps, Mail, SkyDrive, many more)
Other Examples: Google, Gmail, Google Apps, SalesForce CRM
Description:

I chose this category first because it is the most recognizable facet of the Cloud.  We all use Bing and Live Mail/Calendar or whatever your SaaS stack of choice is every day.  And, all three of the Themes play here.

The name of the game in this category is economy of scale.  The goal is to push TCO as far down as possible.  Solutions in this category require the most give in terms of control and should offer the most get in terms of TCO.  This category is defined by pay-as-you-go (or free), self-service, centrally managed capabilities that scale massively.  In many cases, offerings in this category cannot reasonably be built as a one off and as such, there may be no Buy vs. Build decision to be made.

All businesses over the next ten years will use some capability in this category.

As we sunset our legacy On Premises solutions, Multi-Tenant/Public/SaaS solutions should be first on our list as replacements.  Our goal should be to replace as many On Premises solutions as possible with offerings from this category.  The reason for this is the huge amount of cost-savings potential that can be realized here.  Theme #3 looms absolutely gigantic here and is ignored at a business’ peril.

Category: Multi-Tenant/Private/SaaS & Multi-Tenant/Hybrid/SaaS
MS Examples: Line of Business Applications built on Windows Server AppFabric or an Azure Appliance
Other Examples: Other stacks such as IBM
Description:

I have seen it argued that Private/SaaS does not make sense.  I disagree.  If a business invests in a Private/PaaS solution and then builds software services for their business units or subsidiaries on that Private/PaaS, that is Private/SaaS.

We can clump most of these solutions together.  They are typically line of business applications offered internally by an IT department.  The IT departments working in this category may serve multiple business units or various subsidiary companies that for regulatory or security reasons are treated as separate Tenants.  These solutions can stay entirely behind the corporate firewall or reach out and integrate with other systems (sometimes in the Cloud).

These are the systems that we fall back to when there is no public Cloud offering available that meets our requirements matrix.  This has begun to happen less and less frequently as the level of capabilities in Multi-Tenant/Public/SaaS has grown.  It’s very tough for a CIO to tell his employees that they can have a 100 MB mailbox at work and then have them go home and have 25 GB of storage for free in their SkyDrive.

Theme #1 plays here, as oftentimes increased configuration capability is cited by businesses that want to remain in this category.  A detailed and specific analysis of a Requirements Matrix can yield many features in this case that have been mislabeled as Must Haves.  Businesses need to start asking the question “We pay for all these levers we can pull and dials we can turn, but do we ever pull or dial them?  If not, why pay for them?”

Category: Dedicated/Public/SaaS
MS Examples: BPOS-D, BPOS-F
Other Examples:  Hosted Solutions, Hosted CRM
Description:

These are the solutions that are chosen in the presence of a “Good Reason”.  From last time, a Good Reason is as follows:

  • Compliance
  • Data Sovereignty
  • Residual Risk Reduction for high value business data

Because these solutions tend to be more costly and specialized, if requirements fall outside of the Good Reason category, look toward Multi-Tenant/Public/SaaS solutions.  Hosted SPS –> BPOS-S for example.

Category: Dedicated/Private/SaaS & Dedicated/Hybrid/SaaS
MS Examples: Line of Business Applications built on Windows Server AppFabric
Other Examples: Other stacks such as IBM or Oracle
Description:

These solutions are basically the same as the Multi-Tenant variety, but are served up from an IT department with only one Tenant.

PaaS

Category: Multi-Tenant/Public/PaaS
MS Examples:  Windows Azure
Other Examples: GAE, VMForce
Description:

PaaS moves the slider a bit more toward control, but still maintains the ability to realize much lower TCO.  Even the most optimized and dynamic data centers do not run compute at a cost of $0.12/hour.  Businesses taking advantage of this category have the benefits of pay-as-you-go, self-service, elasticity, centralized management and near-infinite scalability.
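Taking the $0.12/hour compute figure above at face value (and ignoring storage, bandwidth and any volume discounts), a rough back-of-the-envelope comparison of provisioning for peak versus scaling elastically looks like this:

RATE_PER_HOUR = 0.12   # compute rate cited above, USD per instance-hour
DAYS_PER_MONTH = 30

# Sized for peak: 4 instances running around the clock.
provisioned_for_peak = 4 * 24 * DAYS_PER_MONTH * RATE_PER_HOUR

# Elastic: 1 instance for 16 off-peak hours plus 4 instances for 8 peak hours, daily.
elastic = (1 * 16 + 4 * 8) * DAYS_PER_MONTH * RATE_PER_HOUR

print(f"Provisioned for peak: ${provisioned_for_peak:.2f}/month")  # $345.60
print(f"Elastic pay-as-you-go: ${elastic:.2f}/month")              # $172.80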

As discussed last time, the unit of deployment for a PaaS solution is at the service level.  The concern then becomes the ambient capabilities that are available to the services in terms of Storage, Compute, Data, Connectivity and Security.  These capabilities must be understood completely if a correct platform decision is to be made.  For example, NoSql can be the right way to go, but do not underestimate the value of relational data to an Enterprise.

Azure, for example, offers the following ambient capabilities to services …

image

Through the following offerings …

image

Thus, even though Azure offers fewer “levers to pull” than an On Premises deployment of Windows Server 2008 and AppFabric, it nonetheless offers a very complete stack for the development of services.

Multi-Tenant/Public/PaaS is very much the future of the Cloud (see my argument about IaaS below).  This is where businesses should target their service development and deployment by default, then fall back for the minority of services that will not work in this category.

Category: Multi-Tenant/Private/PaaS & Multi-Tenant/Hybrid/PaaS 
MS Examples: Azure Appliance or Windows Server AppFabric & System Center Virtual Machine Manager 2008 R2 Self Service Portal 2.0
Other Examples: GAE for Business? vCloud?
Description:

These solutions are for businesses that have Good Reasons but want the self-service, elasticity, centralized management and scalability of a Cloud solution.  They can buy a Private PaaS Cloud or build one.  The typical On Premises data center has only a few of the features of a Private PaaS Cloud.  It is generally not self-service, does not offer elasticity and is typically more difficult to manage, all resulting in higher TCO.

Windows Server AppFabric along with System Center Virtual Machine Manager 2008 R2 Self Service Portal 2.0 (VMMSSP – say that 10x fast!) provides capabilities for IT departments to build their own Private Cloud.  But businesses can also purchase a pre-fab, pluggable, modular PaaS Private Cloud in the form of an Azure Appliance.  This really only makes sense when a business is at very large scale or has a Good Reason (generally Data Sovereignty).

So where does the lower TCO come from?  The answer is less in the economy of scale of a Public PaaS, but more in the efficiency and centralized management.  It’s a private data center, but it’s a better, more efficient private data center. 

Businesses in this category should make a concerted effort to segment the portions of their operations and/or data that can be moved to the Public Cloud.  Then they should construct or buy a Private Cloud to host the remaining operations or data.

The Dedicated versions of the Private/PaaS and Hybrid/PaaS are single-Tenant versions of the Multi-Tenant/*/PaaS solutions.  The Dedicated version of the Public/PaaS is something along the lines of a Hosted xRM solution.  These solutions are fairly narrow and can likely be repositioned into the Public/PaaS category.

IaaS

I need to make a caveat up front here: IaaS is not as interesting to me as SaaS and PaaS.  I don’t see as big an opportunity for IaaS to lower TCO as with PaaS and SaaS.  Instead of the complete and centrally managed functionality of SaaS or PaaS, with IaaS a business rents metal.  Businesses have been doing that for years with hosting solutions and it has yet to change the world.  At the end of the day, it’s metal a business still needs to manage, and all the inefficiencies inherent in that situation raise TCO.

Amazon EC2 has done a good job re-invigorating the notion of Utility Computing by adding the elasticity and self-service elements.  Amazon has also done a good job commoditizing the sale of extra compute hours in its data center.  It’s not a business model I fully understand, however.  They are driving the value of a compute hour down as far as it can go which is good for everyone except companies that sell compute hours!  As a result, Amazon can only get so far renting out spare cycles in their datacenters.  They are moving more toward a complete PaaS offering which says to me that even the poster child of IaaS sees PaaS as more interesting.  So at least I’m not alone! :)

But, in the interest of completeness …

Category: Multi-Tenant/Public/IaaS & Multi-Tenant/Hybrid/IaaS 
MS Examples:  N/A
Other Examples: Amazon EC2
Description:

Businesses in this category rent metal from another business.  They don’t rack and stack the machines, but they deploy and manage as if they were machines in their own data center.  They also buy and maintain licenses for software above the OS level.  The notions of elasticity and self-service differentiate an IaaS solution from a hosting solution.

I do not put the Azure VM role in this category. I will not discuss why until more about that offering is officially announced, but suffice it to say that it does not belong here.

Category: Multi-Tenant/Private/IaaS
MS Examples:  Windows Server AppFabric & System Center Virtual Machine Manager 2008 R2 Self Service Portal 2.0
Other Examples: vSphere
Description:

Businesses in this category have significant investment in On Premises solutions and want to benefit from self-service, elastic, centrally managed capability.  Businesses in this category should be looking to move to a combination of Multi-Tenant/Hybrid/PaaS and Multi-Tenant/Hybrid/IaaS solutions to lower TCO as much as possible.

Category: Dedicated/Private/IaaS, Dedicated/Hybrid/IaaS, Dedicated/Public/IaaS
MS Examples:  Windows Server AppFabric & System Center Virtual Machine Manager 2008 R2 Self Service Portal 2.0
Other Examples: vSphere, Hosted Solutions
Description:

Some businesses in this category have realized the benefits of having someone else rack and stack their servers, but have not moved their traditional hosting solutions to Cloud solutions.  Others have a traditional On Premises deployment.  Businesses in this category should be looking to move to a combination of Multi-Tenant/Hybrid/PaaS and Multi-Tenant/Hybrid/IaaS solutions to lower TCO as much as possible.

Conclusion

Whew!  We made it!  This time, we established our Three Themes as they pertain to the Cloud and went into the nitty-gritty of each category of Cloud offering and the type of business that can be found there. As Cloud professionals, it is our responsibility to understand the categories and the themes as well as the Cloud offerings that apply in each category.

Putting the Cloud in the perspective of the Not New Themes makes it more approachable for businesses.  CIOs can use the same processes, make the same trade-offs and realize the same results that they are familiar with, but apply them to the Cloud.  That makes Cloud a part of the team and not a confusing, unclear outlier.  It helps bring analysis and decisions around the Cloud down to size.


Lori MacVittie (@lmacvittie) claimed Cloud is more likely to make an application deployment more – not less – complex, but the benefits are ultimately worth it as a preface to her Cloud + BPM = Business Process Scalability post of 8/25/2010 to F5’s DevCentral blog:

image I was a bit disconcerted by the notion put forward that cloud-based applications are somehow less complex than their non-cloud, non-virtualized predecessors. In reality, it’s the same application, after all, and the only thing that has really changed is the infrastructure and its complexity. Take BPM (Business Process Management) as an example. It was recently asserted on Twitter that cloud-based BPM “enables agility”, followed directly by the statement, “There’s no long rollout of a complex app.”

That statement should be followed by the question: “How, exactly, does cloud do anything to address the complexity of an application?” It still needs the same configuration, the same tweaks, the same integration work, the same testing. The only thing that changed is that physical deployment took less time, which is hardly the bulk of the time involved in rolling out an application anyway.

BPM applications themselves are not that complex. I spent more than six months of my life rolling out and implementing just about every BPM solution on the market. Trust me, deploying one of these babies is just the beginning of what can only be described as anything but a sanguine experience. In fact, I’d say very rarely is the actual deployment of any application difficult. Now getting it to work – and integrated with other systems and data sources – that’s a whole different ball game. And maybe that’s the disconnect – my definition of deployment is the installation and basic connectivity. Everything after that is customization and not intrinsic to the application but to its run-time behavior. Suffice to say cloud does absolutely nothing to change the integration and configuration and testing required to orchestrate business processes which, if you recall, is the purpose of BPM in the first place.

CLOUD MORE LIKELY TO INCREASE not DECREASE COMPLEXITY

image If truth be told, using a cloud-based BPM is likely to introduce more complexity due to the very nature of these beasties; they are all about orchestration of business processes. In the post-SOA world this generally means the orchestration of a series of services (REST or SOAP or POX, choose your poison) designed to codify a well-described process through which a customer or employee or whomever might “walk” to complete some task. Spread that across a cloud in which you have very little control over the infrastructure, or need to integrate back into data center-deployed systems; add in a healthy helping of dynamism in a system that relies on distributed services and voila! Greater complexity.

And agility? The agility benefit of BPM comes from the capability to rapidly change business processes, which actually comes from the fact that most modern BPM leverage SOA. Even allowing that the definition of “rollout” includes “doing something useful” I’d still say it’s little more than rainbows and unicorns. If it isn’t the politics of trying to figure out what the process actually is then it’s the integration nightmare of trying to make them all work together.

Devops, are you paying attention? Cause this is your future, if you aren’t already there. While automation systems and existing open source solutions like Chef and Puppet are certainly helpful, you’re only scratching the surface of what a full data center orchestration implementation is going to require.

ALIGNING IT with the BUSINESS

Lest you think I’m all “BPM in the cloud is useless” there is a definite benefit in leveraging the elastic scalability of both infrastructure and services with BPM. Consider that if you distill BPM down it’s really a sophisticated integration bus that guides a user through a specific process, like checking the status of an order. Thus there is a definitive set of entry and exit points, with a series of steps (activities) that occur to gather the data required and return an answer. It’s a service-oriented architecture that may or may not leverage SOAP/REST/XML; the key is individual services that make up a larger system, each of which should align nicely with a specific business activity.

image Each activity is often represented by a service, whether it be via integration with another application, a remote API call, or a direct service interface to a data source.  Each of these is generally an individual service, each with its own unique compute resource needs.  Thus the scalability of a business process is directly impacted by the scalability (and availability) of each of those services, regardless of where they may be located.  It is likely that one or two of the steps in a business process may be more computationally intense than others, and it is almost certainly the case that each service will have its own scalability profile.

Therefore, cloud – with its notion of application scalability really being virtual machine (or instance) scalability at this point – is inherently well-positioned to enable the scalability of individual services within an overall business process composition.  Each service can scale as necessary, ensuring that the overall (business) process scales on-demand.  Consider that a single business process may have two different entry points: one for customer service reps (CSRs) and one for users via a web interface. During the week the CSR entry point may need to scale on-demand to meet the inevitable Monday morning rush.  But on the weekend it may be the web interface that needs scaling because there are no CSRs to be had.  This pattern could be manually handled by devops changing the resource pools assigned to each activity on Friday to a weekend profile and then back again early Monday morning to a weekday profile, but cloud and automation offer the means by which this can be handled without manual intervention and prevent any “surprise” scalability demands from cropping up and driving availability (and customer satisfaction) down.  It scales the process from a business perspective without incurring IT hours, which keeps costs down.
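A minimal sketch of that weekday/weekend profile switch (service names and instance counts are hypothetical, and a real deployment would push the counts through the cloud provider's provisioning or autoscaling API rather than print them):

from datetime import datetime

# Hypothetical instance counts per entry point for each profile.
SCALING_PROFILES = {
    "weekday": {"csr_entry_point": 8, "web_entry_point": 2},
    "weekend": {"csr_entry_point": 1, "web_entry_point": 6},
}

def current_profile(now):
    """Pick a profile by day of week (Mon=0 .. Sun=6)."""
    return SCALING_PROFILES["weekend" if now.weekday() >= 5 else "weekday"]

def reconcile(desired, set_instance_count):
    """Push the desired counts through whatever provisioning hook is available."""
    for service, count in desired.items():
        set_instance_count(service, count)

reconcile(current_profile(datetime.now()),
          lambda service, count: print(f"{service} -> {count} instances"))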

This aligns IT with the business, which is rarely so clear-cut as to show the value of an implementation as is evident with BPM and SOA.

IT IS STILL ABOUT the ARCHITECTURE

Ultimately the agility and reduction in complexity offered by cloud is tied to the infrastructure, and the alleged ease with which scale and enforcement of delivery policies can be applied at strategic points of control throughout the architecture. BPM, CRM, SFA, home-grown applications. None of these are necessarily made less complex by cloud computing and in fact governing the intricate relationships of such applications is made more difficult by cloud because of the dynamism inherent in the underlying network, application network, and server infrastructure upon which such environments are built.

But it does provide the opportunity to architect that infrastructure in such a way as to align technological capabilities with business needs and serve as a means to ensure business scalability through more efficient scalability of applications and infrastructure. Automation is necessary in these more complex environments to eliminate the increased risk of human error as a cause of downtime and performance impeding problems and to assist in realizing the benefits of more efficient use of compute resources.


Chris Czarnecki posted his Comparing PaaS and IaaS article to the Learning Tree blog on 8/25/2010:

image One of the most common questions I am asked when consulting or teaching Learning Tree’s Cloud Computing course is “What is the difference between Platform as a Service (PaaS) and Infrastructure as a Service (IaaS)?” This is an excellent question that the cloud computing vendors do little to help clarify.

Let’s consider IaaS first…
As the name suggests, what is provided here is an infrastructure delivered as a service. This includes hardware (servers, networks, load balancers etc) and software (operating systems, databases, application servers etc). The largest provider of IaaS is Amazon AWS and they have a wide variety of hardware and software combinations to choose from.

Now let’s consider PaaS…
What we are gaining here is a platform as a service. This includes hardware (servers, networks, load balancers etc) and software (operating systems, databases, application servers etc). There are a number of PaaS providers including Google App Engine, Microsoft Azure and Salesforce.com’s Force.com.

Is the difference clear now?
I thought not. On the surface the feature sets of both IaaS and PaaS look the same, but delving a little further, a major difference is the amount of control a user has over the service. Take for example Microsoft Azure. Using Azure, the user has no control over the operating system, security features or the ability to install software applications – other than your own applications developed specifically for Azure. The same can be said for Google App Engine and Force.com. All operating system updates, versions, patches, security etc. are controlled and implemented by the PaaS vendor.

Now considering IaaS. With IaaS, the user selects a configuration which defines server size, operating system, application software etc. and then has complete responsibility for the maintenance of the system. If an operating system upgrade is required – it’s your responsibility. A security patch – it’s your responsibility. Want to install a new application or a database – feel free, it’s your server.

So in summary…
A major difference between IaaS and PaaS is the amount of control over the system available to users of the services. IaaS provides total control, PaaS typically provides no control. This also means virtually zero administration costs for PaaS whereas IaaS has administration costs similar to a traditional computing infrastructure.
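One way to make that control/administration trade-off concrete is a simple responsibility matrix; the layer split below is a common rule of thumb rather than any vendor's official definition:

# Who manages each layer? A rough rule-of-thumb split, not a vendor-specific definition.
LAYERS = [
    "Physical hardware and network",
    "Virtualization",
    "Operating system and patches",
    "Runtime / application server",
    "Application code and data",
]

RESPONSIBILITY = {
    "IaaS": ["provider", "provider", "customer", "customer", "customer"],
    "PaaS": ["provider", "provider", "provider", "provider", "customer"],
}

for model, owners in RESPONSIBILITY.items():
    print(model)
    for layer, owner in zip(LAYERS, owners):
        print(f"  {layer:32} -> {owner}")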

There are many other differences between IaaS and PaaS of course. It is these kinds of things that we investigate and evaluate, as well as provide hands-on experience of, in the Learning Tree Cloud Computing course.


Peter Silva asked “Did you expect the level of technology we have today, 10 years ago?” as a preface to his Money Really Moving to the Cloud essay of 8/25/2010:

Ever wish you saved a blog title for that ‘perfect’ eye-catcher?  Well, I do.  Last week I wrote about some cloud surveys talking about how financial institutions are using cloud services titled: CloudFucius’ Money: Trickles to the Cloud.  I get to this week’s weekly entry and Ka-Chow (with apologies to Lightning McQueen) that would have been the perfect title to this entry.  Oh well, as to not be included in the Department of Redundancy Department Hall of Fame, I had to be a little creative with something slightly different.  And for all of you looking at the title and wondering, ‘Did this guy just copy his last blog entry?’  I can assure you, this is all new material so let’s get to it.

image Have you seen the news?  Cloud Computing Ranks High on Fujitsu’s M&A Shopping List; HP, Dell in Bidding War for Cloud Computing Provider; 6fusion is hiring after raising $3 million; and Nimbula raises $15M to expand cloud service.  I guess we’ve moved slightly past the ‘early adopter’ stage and right into the ‘gimmie more’ stage.  Throughout the CloudFucius series, we’ve tried to investigate the various surveys showing cloud computing movement and hindrances along with learning about areas we were not so knowledgeable. 

It’s almost following the same pattern as 26 Short Topics About Security where I filled the entries with stats, surveys, stories, suggestions and, as Don MacVittie commented, ‘a link fest’ of articles.  I tried to present multiple sides of the story, especially with surveys virtually contradicting themselves when it comes to cloud computing.  They want it, they are hesitant; looking into it, waiting until it’s mature; cost saver, virtual sprawl; we’ve deployed, what the heck is it?

What is intriguing to me, errr, CloudFucius, is that I had always thought – both my impression and what analysts have said – that cloud computing will never take over the world but is simply another option for IT with various benefits.  Right now, that’s exactly what it is.  With the announcements above, it sure seems like a lot of providers and investors feel that it’ll be a much larger force within the technology industry. 

Almost every technology company, including F5, is providing some sort of service that ‘plays’ in the cloud.  Many of us have also been to trade shows where the vendor booth is touting some ‘cloud’ connection and you look at them and go, ‘huh?’  How does that ‘enable’ cloud computing?  ‘Ummm, we use the cloud to do this, that or the other thing.’

How will it all turn out?  Who knows at this point.  Did you expect the level of technology we have today, 10 years ago?  Did you expect RF chips in the underwear you are purchasing?  Did you expect common thieves going to a cloud to steal your info?  Did you think you’d be able to surf the net on an airplane?  Maybe we thought it *might* happen at some point, but we are living it now.  ♫ Meet George Jetson……

And one from Confucius: Fine words and an insinuating appearance are seldom associated with true virtue.

PS: The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18.

Peter covers security for F5’s Technical Marketing Team.


Douglas C. Schmidt and Ron Guida wrote Achieving Ultra High Performance in the Cloud for the HPC in the Cloud blog on 8/24/2010:

image Companies in competitive domains, such as financial services, digital media, text mining, and enterprise content management, create large repositories of data collected from their daily operations. Analyzing this archived data can yield knowledge that drives future business and provides significant market advantages over competitors. Although companies could use specialized supercomputers, the custom development time and hardware costs are prohibitive.

Another approach is to use proprietary and custom-built high-performance computing (HPC) software platforms atop emerging cloud computing environments. This approach, however, has the following drawbacks:

•  Price-per-core performance is not tied to linear gains in application speedups or compute processing.  Conventional HPC software and hardware platforms are cost-prohibitive since they do not accelerate performance commensurately to the investment of resources allocated and do not adapt dynamically to changing workloads and resource availability.

• Custom development and integration. Conventional HPC software platforms require extensive manual development and integration of custom server and application programming before they can work (and many common and legacy apps cannot be modified unless they are redeveloped).

• Tied to modified apps. Once applications are customized, they are locked in to a particular HPC platform and deployment configuration, and cannot leverage updates without redoing the intense customization.

• Complex setup with no support for automated plug and play. Conventional HPC software platforms require complex setup and customization to adjust the load manually on all processors in the network since they don’t have automatic adaptive load balancing.

What is needed, therefore, are solutions that can leverage hardware and software innovations in distributed and parallel computing, while simultaneously reducing the learning curve and effort needed to incorporate these innovations into mission-critical applications running in cloud environments.  In particular, solutions are needed to map compute-intensive applications to high-performance cloud computing environments that provide the following capabilities.

Achieving Extreme—Yet Cost-Effective—HPC Cloud Performance

Cost-effective, HPC solutions for cloud environments should have the following features:

• Dynamic, adaptive, and real-time load management and equalization to utilize and distribute the workload in real-time across all available cloud computing and networking resources. This load equalization ensures every processor/core is near-optimally utilized to maximize computing performance.

[Read more]
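
As a sketch of the adaptive load-equalization idea in the first capability above (a generic least-loaded dispatcher, not Zircon's or anyone else's actual scheduler):

import heapq

class LeastLoadedDispatcher:
    """Toy illustration of spreading tasks across workers by current load.

    A production HPC scheduler would also weigh heterogeneous core speeds,
    data locality and completion feedback; this only tracks relative load.
    """

    def __init__(self, workers):
        # Min-heap of (current_load, worker); the least-loaded worker pops first.
        self._heap = [(0.0, worker) for worker in workers]
        heapq.heapify(self._heap)

    def submit(self, task_cost):
        load, worker = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + task_cost, worker))
        return worker

dispatcher = LeastLoadedDispatcher(["node-1", "node-2", "node-3"])
for cost in [5.0, 1.0, 1.0, 2.0, 4.0]:
    print(cost, "->", dispatcher.submit(cost))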

Dr. Douglas C. Schmidt is a Professor of Computer Science at Vanderbilt University. Ron Guida joined Zircon Computing in 2007 as Director of Worldwide Sales and Marketing.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

The HPC in the Cloud blog reported HP Institutes Efforts to Spur Private Cloud Adoption on 8/25/2010:

image HP today unveiled a Private Cloud Readiness Program consisting of self-assessment tools and a Cloud Boot Camp to be held during the VMworld conference next week at San Francisco’s Moscone Center.

image The company also announced a team of experts who, as pioneers in cloud computing, are driving future innovations and assisting clients with advanced cloud deployments.

Get ready to assess, plan, deploy

HP has designed a private cloud scorecard that enables companies to rate and rank key factors that help determine their “cloud readiness.” Questions range from rating internal knowledge of cloud concepts and services to existing cloud security capabilities. View the scorecard at www.hp.com/go/cloudassessment.

HP also is offering a Cloud Boot Camp for clients during VMworld to be held on Sept. 2 from 9 a.m. to 11 a.m. PT at the Westin Hotel (“Press Room”). Developed and conducted by technology experts, the boot camp will provide participants the knowledge needed to transform their infrastructures, applications and processes for the private cloud. VMworld attendees can register at the HP Cloud Assessment room from noon to 6 p.m. PT on Aug. 31 and Sept. 1.

There is also a Cloud Boot Camp and breakfast specifically for analysts and media. Details are below.

Meet the HP Cloud Advisors

With combined experience of nearly 200 years, the HP Cloud Advisors consist of the best technical and strategic minds in cloud computing. They offer a unique combination of vision and real-world experience, as well as a distinct point of view. Each was hand-picked based on the innovations they influenced, their knowledge and their expertise. The HP Cloud Advisors are:

    * Nigel Cook is an HP technology director and strategist. He is an integral part of the leadership team spearheading the DMTF Cloud Management Working Group. He was previously part of the DMTF Cloud Incubator, formulating technology submissions, requirements and use cases for interoperability between an enterprise data center and Infrastructure-as-a-Service clouds.

    * A noted cloud computing expert, Jamie Erbes is chief technology officer for the Software and Solutions business at HP. She is responsible for driving the company’s strategy for IT management software. Follow Erbes on Twitter.

[Read more]


<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) casts a jaundiced eye on Verizon’s PCI-compliance claim in his Dear Verizon Business: I Have Some Questions About Your PCI-Compliant Cloud… post of 8/24/2010:

image You’ll forgive my impertinence, but the last time I saw a similar claim of a PCI-compliant Cloud offering, it turned out rather anticlimactically for RackSpace/Mosso, so I just want to make sure I understand what is really being said.  I may be mixing things up in asking my questions, so hopefully someone can shed some light.

This press release announces that:

“…Verizon’s On-Demand Cloud Computing Solution First to Achieve PCI Compliance” and the company’s cloud computing solution called Computing as a Service (CaaS) which is “…delivered from Verizon cloud centers in the U.S. and Europe, is the first cloud-based solution to successfully complete the Payment Card Industry Data Security Standard (PCI DSS) audit for storing, processing and transmitting credit card information.”

It’s unclear to me (at least) what’s considered in scope and what level/type of PCI certification we’re talking about here since it doesn’t appear that the underlying offering itself is merchant or transactional in nature, but rather Verizon is operating as a service provider that stores, processes, and transmits cardholder data on behalf of another entity.

Here’s what the article says about what Verizon undertook for DSS validation:

To become PCI DSS-validated, Verizon CaaS underwent a comprehensive third-party examination of its policies, procedures and technical systems, as well as an on-site assessment and systemwide vulnerability scan.

I’m interested in the underlying mechanicals of the CaaS offering.  Specifically, it would appear that the platform – compute, network, and storage – is virtualized.  What is unclear is if the [physical] resources allocated to a customer are dedicated or shared (multi-tenant), regardless of virtualization.

According to this article in The Register (dated 2009), the infrastructure is composed like this:

The CaaS offering from Verizon takes x64 server from Hewlett-Packard and slaps VMware’s ESX Server hypervisor and Red Hat Enterprise Linux instances atop it, allowing customers to set up and manage virtualized RHEL partitions and their applications. Based on the customer portal screen shots, the CaaS service also supports Microsoft’s Windows Server 2003 operating system.

Some details emerge from the Verizon website that describes the environment more:

Every virtual farm comes securely bundled with a virtual load balancer, a virtual firewall, and defined network space. Once the farm is designed, built, and named – all in a matter of minutes through the CaaS Customer Management Portal – you can then choose whether you want to manage the servers in-house or have us manage them for you.

If the customer chooses to manage the “servers…in-house (sic)” is the customer’s network, staff and practices now in-scope as part of Verizon’s CaaS validation? Where does the line start/stop?

I’m very interested in the virtual load balancer (Zeus ZXTM perhaps?) and the virtual firewall (vShield? Altor? Reflex? VMsafe-API enabled Virtual Appliance?)  What about other controls (preventive or detective, such as IDS, IPS, AV, etc.)?

The reason for my interest is how, if these resources are indeed shared, they are partitioned/configured and kept isolated especially in light of the fact that:

Customers have the flexibility to connect to their CaaS environment through our global IP backbone or by leveraging the Verizon Private IP network (our Layer 3 MPLS VPN) for secure communication with mission critical and back office systems.

It’s clear that Verizon has no dominion over what’s contained in the VM’s atop the hypervisor, but what about the network to which these virtualized compute resources are connected?

So for me, all this all comes down to scope. I’m trying to figure out what is actually included in this certification, what components in the stack were audited and how.  It’s not clear I’m going to get answers, but I thought I’d ask any way.

Oh, by the way, transparency and auditability would be swell for an environment such as this. How about CloudAudit? We even have a PCI DSS CompliancePack.

Question for my QSA peeps: Are service providers required to also adhere to sections like 6.6 (WAF/Binary analysis) of their offerings even if they are not acting as a merchant?

/Hoff

I have similar misgivings and reservations about Verizon’s PCI claims.


Matthew Weinberger posted Novell Launches Cloud Security Service for MSPs to the MSPMentor blog on 8/24/2010:

Today marks the launch of the Novell Cloud Security Service, designed to give MSPs, hosting companies and cloud providers the ability to deliver compliance and secure access for their customers’ applications. Novell says the multi-tenant access and identity management solution is the company’s first cloud offering built exclusively for partners. Here’s the scoop.

image The core concept, says Novell Senior Solutions Marketing Manager Anita Moorthy, is to give end-customers the peace of mind of knowing that their access to SaaS applications is fully managed and compliant at all times.

Where do partners fit in? Novell Cloud Security Service is hosted with the partner, not with Novell, so MSPs can offer customized security that extends customers’ existing identity management infrastructure into the cloud.

As far as the billing model goes, Novell Director of Partner Marketing Dan DuFault says that it’s “pay by the drink,” which is to say that the MSP or hosting firm can track customer usage and bill whatever they deem appropriate, and then Novell gets a cut. DuFault says it’s designed to give their cloud partners an easy way to ride SaaS momentum.

Novell’s press release marks the new cloud security service as part of their “WorkloadIQ vision,” helping customers move workloads into the cloud.


Alex Williams asserted Cloud Security Technology Should Exceed Expectations in this 8/24/2010 post to the ReadWriteCloud:

Cloud security can be a bit confusing at times. What comes with the topic are lots of contradictions. That's without a doubt.

For example, Tom Mornini co-founded Engine Yard. He wrote a commentary piece for ZDnet that compares cloud security to the Maginot Line. He describes how an on-premise environment can be a trap in some ways. You think it is safe behind lock and key. But intrusions continue due to any number of factors. He argues that the public cloud may actually be more secure. He freely admits that his position may seem counterintuitive.

image "While it may sound counter-intuitive, I firmly believe that applications deployed to public clouds will prove to be more secure than those deployed on private clouds. Why? Because the on-premise approach to security is the modern day equivalent of the Maginot Line: Data security can only be guaranteed if the data is entirely secured from attacks from all directions. Putting data in a building secured by a guard in front of a large steel door is not the answer to today's security problems!"

It may seem implausible that data is safer outside the walls of the data center. The problem? The data is difficult to observe as it flows through a virtual network. Tools are needed to observe how that data flows. By watching the data, abnormalities can be examined.

Mornini makes the point that cloud security needs to go above and beyond what has been traditionally developed to protect the traditional enterprise.

Protecting the Virtual Network

In many respects, security is defined by how the network can be observed and protected from an attack.

Gary Kinghorn of the Hewlett-Packard Tipping Point team says that as more apps move onto the network, the potential for attacks intensifies. A malicious app may attack another app. For instance, an app with credit card data may be attacked by a botnet. The question comes down to whether the data will be safe as it travels between virtual machines.

Tipping Point monitors this virtual machine traffic with its Intrusion Prevention System (IPS) appliances. The IPS analyzes the content of a packet traveling over a network. Tipping Point's competitors include McAfee, which markets a software-based IPS. McAfee was acquired by Intel last week.

VController is the Tipping Point software that sits in the hypervisor. It watches the traffic between virtual machines and redirects it appropriately to the IPS box if needed.

Since the traffic is passed through the IPS, it is inspected and filtered with TippingPoint's Digital Vaccine service, which uses security intelligence from TippingPoint and information from outside researchers.

The system integrates with VMware's VCenter, providing the capability to detect all the virtualized hosts and deploy policies accordingly.

Malware developers have their sights set on cloud computing. If apps can be hijacked in a virtual network then it creates a new dimension to what exploits are possible.

In the meantime, it's up to the security software market to develop a new generation of first-class technologies to counter the skepticism that is so predominant in today's market.

Hewlett-Packard covered the airfare and hotel expenses for the author to attend the company's HP Networking Day.


image<Return to section navigation list> 

Cloud Computing Events

GigaOm announced its Mobilize 2010 conference to be held at San Franciso’s Mission Bay Conference Center on 9/30/2010:

Game Changers -The People Calling the Shots
image This year's Mobilize conference on September 30 will be a high-level meeting of mobile stakeholders, from technology executives to VCs, who will discuss and debate the most game-changing aspects of the mobile industry today. Register today for Mobilize 2010 and save $100.

image The entire agenda and speaker roster can be found here, but we wanted to highlight three such topics and the speakers that will be covering them.

Design and Ethnography*
How will usage, lifestyle trends and technology adoption affect mobile device design? We have seen iPhone and iPad slash the competition, but will the slick veneer remain? Mobilize will feature some of the biggest names in design. Learn what's next from the experts.

  • Yves Béhar, Founder, Fuseproject
  • Mike Kuniavsky, CEO, ThingM
  • Christian Lindholm, Partner and Director, Fjord
The Internet of Things
The M2M or "Internet of Things" proposition opens up a vast new array of opportunity for carriers, entrepreneurs and consumer experiences. We look at some of the biggest markets out there - medicine, consumer goods, automotive and more. What needs to be done to catalyze the opportunity and what returns will these markets yield?
  • Derek Kuhn, VP of Emerging Technology and Media, Alcatel-Lucent
  • Mike Kuniavsky, CEO, ThingM
  • Doug VanDagens, Director, Connected Services, Ford Motor Company
Mobile, meet Cloud...
What will the "mobile cloud" do for innovation? How will wireless broadband networks enable consumer adoption of cloud services for mobile? Which new mobile web technology areas are being funded and why?
  • Ken Denman, CEO, Openwave Systems
  • Amir Lahat, Head of Corporate Business Ventures, Nokia Siemens
  • Steve Mollenkopf, EVP and President, Qualcomm CDMA Technologies
  • Dr. Tero Ojanperä, EVP, Services, Mobile Solutions, Nokia
  • Juha Christensen, Chairman and CEO, CloudMade

See our full schedule here.

LaunchPad
The closing date for entries is August 25, 2010 at 11:59 PDT, and the winners will be announced on September 1, 2010. If your mobile startup is ready to debut at Mobilize, apply here.

Super saver pricing ends this week. Register today for Mobilize 2010 and save $100.

* In my view, ethnography is a bit off the wall for a mobile computing conference. (Microsoft is a sponsor.)

Informa Telecoms and Media announced the Cloud Mobility conference will be held at the Hotel Okura, Amsterdam, on 9/14 and 9/15/2010:

Informa Telecoms and Media is proud to announce a brand new, industry-changing event for September 2010, and the first conference of its kind dedicated purely to the Mobile Cloud

Use of the mobile cloud is set to increase from 42.8 million consumers in 2008 to almost a billion by 2014, jumping from 1.1% to 19% of all mobile phone subscribers*. The emergence of the mobile cloud has huge ramifications for the entire mobile ecosystem, changing the way that developers build apps and how OEMs, ISPs and Operators define app selection and distribution.

The scale of growth of the mobile cloud will force competitors to not only open dialogue but also work together, once again changing the nature of the fluid communications industry.

Our brand new launch event, Cloud Mobility, will provide you with the forum in which to learn from and meet with the early pioneers of this new era. Through a series of interactive discussions, break-out, in-depth workshops, best practice tutorials and fantastic networking opportunities, Cloud Mobility will help you determine how to effectively monetise new revenue streams and best educate your consumer with respect to these enormous changes.

  • How can you best get your business ready for a move into the Mobile Cloud?
  • How is the Mobile Cloud set to change the industry?
  • What are the costs of moving into the Mobile Cloud and what sort of revenue
    can you expect?
  • How can you best educate your customers about the Mobile Cloud?
  • What devices will be the most suitable for the Mobile Cloud?
  • In what way must developers change to create for the Mobile Cloud?
  • Will the Mobile Cloud help solve interoperability?
  • What must you do from a legal perspective before moving to the Mobile Cloud?
  • What should you take into the Mobile Cloud and what should be left behind?

Free Operator Passes

Cloud Mobility offers free passes for mobile operators. Join

  • Telecom Italia
  • Hutchison 3G Austria GmbH
  • Vodafone Netherlands
  • bouyguestelecom
  • Orange Israel
  • Communications & IT Commission(CITC)
  • Converged Solutions
  • LUXGSM S.A.
  • Turkcell iletisim Hizmetler A.S
  • gtel
  • Vodacom
  • Deutsche Telekom AG
  • Vodafone
  • Forum Telecom
  • ACP Group

at Cloud Mobility 2010 by clicking here!

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Liz McMillan reported that Red Hat Plans to bring the JBoss Enterprise Middleware strategy of Open Choice to the cloud in her Red Hat Outlines Platform-as-a-Service Cloud Computing Strategy post of 8/25/2010:

Red Hat, Inc., a provider of open source solutions, on Wednesday announced its vision for a comprehensive Platform-as-a-Service (PaaS) solution as part of its Cloud Foundations, a portfolio that will promote consistency between enterprise applications and the cloud. Red Hat is the only vendor that has the infrastructure capable of delivering an open source, flexible cloud stack, incorporating operating system, middleware and virtualization. Based on JBoss Enterprise Middleware, Red Hat PaaS is designed to be the solution within the portfolio that will allow enterprises, cloud service providers, ISVs and Software-as-a-Service (SaaS) providers to take existing assets and develop new applications and deploy them to a wide range of public and private clouds.

image Red Hat plans to bring the JBoss Enterprise Middleware strategy of Open Choice to the cloud through flexible deployment options in addition to a choice of development frameworks and languages. Red Hat's next-generation PaaS solution will be designed to simplify the development of new simple web applications as well as complex, transactional enterprise applications and integrate them into an enterprise. Additionally, Red Hat PaaS is expected to offer a comprehensive reference architecture to enable existing applications to be re-purposed within a wide choice of private and public clouds, protecting existing investments. With Red Hat PaaS, enterprises, cloud service providers, ISVs and SaaS providers will have the opportunity to leverage their existing skills without rewriting applications.

"Application infrastructure (middleware) is a key technology layer in enterprise computing, and it is of equal role and importance in cloud computing as well. To achieve the full cost, agility, productivity and scale benefits of cloud computing, applications must be deployed over a native cloud-enabled application infrastructure. Mainstream organizations must prepare to evaluate a full range of deployment options, including cloud, when planning their future application infrastructure investments," said Yefim Natis, vice president, Distinguished Analyst, Gartner, Inc.

Red Hat PaaS Built on JBoss Open Choice
Red Hat plans to make PaaS available as software offered as a service in public or private clouds to help developers and organizations build, deploy and manage the entire life cycle of applications. Red Hat PaaS solutions will be based upon Red Hat’s JBoss Enterprise Middleware, a comprehensive product portfolio for application and integration services, and Red Hat’s cloud engine for lifecycle management of applications.

Support Programming Models of Choice – Red Hat PaaS will be based on the JBoss Open Choice strategy, which enables developers to build applications in their programming framework or language of choice, including Java EE, POJO, Spring, Seam, Struts, GWT, Groovy and Ruby. JBoss Developer Studio is expected to include a series of Eclipse plug-ins to deploy applications into a JBoss platform instance within a cloud. This will allow developers to leverage existing skills and not force them to start over with proprietary cloud APIs.

Deployment Portability & Interoperability – JBoss cloud images are expected to be available through a variety of public and private clouds including Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, Amazon EC2, Windows Hyper-V and more through Red Hat's cloud engine. This feature is designed to enable enterprises to migrate existing applications to the cloud without rewriting them. Red Hat application engine will be designed to enable developers and IT operations staff to create JBoss platform instances in public or private clouds and provide elastic scalability.

Comprehensive Middleware Reference Architecture for PaaS – Red Hat PaaS will offer a comprehensive set of middleware capabilities, beyond simple containers, for building, deploying and integrating applications within clouds and on-premise deployments. Red Hat PaaS is expected to include containers, transactions, messaging, data services, rules, presentation experience and integration services.
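
Messaging is one of the middleware services named above, and in a Java EE setting it surfaces as the standard JMS API rather than a proprietary cloud interface. The minimal sketch below sends one message to a queue looked up through JNDI; the JNDI names are assumptions and depend on how the JBoss messaging provider is configured.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    // Sends a single text message to a JMS queue. Assumes a JNDI environment
    // (e.g., a jndi.properties file) pointing at the JBoss naming service, and
    // that a queue named "queue/orders" exists; both names are illustrative.
    public class QueueSender {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/orders");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("order-12345"));
            } finally {
                connection.close();
            }
        }
    }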

Entire Application Lifecycle – Red Hat PaaS will seek to provide all of the services necessary for application lifecycle management, including building, deploying and managing. JBoss Operations Network offers the tools to manage and monitor JBoss platform instances in the cloud.

“Our enterprise customers leverage the JBoss Open Choice strategy today to protect existing investments while retaining the ability to choose the right developer environment for the problem at hand and skill set,” said Craig Muzilla, vice president and general manager, Middleware Business Unit at Red Hat. “With growing interest in the benefits of cloud computing, enterprises are looking to leverage cloud deployment of existing applications as well as develop new applications in the cloud. We believe that Red Hat PaaS will be ideally suited to deliver the flexibility required by CIOs to respond to business needs with rapid development and deployment and simplified management.”

Red Hat PaaS Aims to Optimize Application Lifecycle

Today, enterprises can begin to deploy JBoss Enterprise Middleware in private clouds by using Red Hat Consulting services and offerings from partners. Red Hat offers a comprehensive suite of services across the application lifecycle, designed to help customers move existing deployments to private and public clouds, develop new applications there, and manage them cost-effectively.

Over time, Red Hat PaaS is expected to expand to areas such as testing and QA services, automated elasticity, provisioning, deployment services for building multi-tiered, multi-service applications and metadata management across services.


Maureen O’Gara asserted “To maintain Eucalyptus' compatibility with Amazon it's been given S3 versioning” as a preface to her Eucalyptus Cloud Project Revved post of 8/25/2010:

Eucalyptus Systems was expected to update its eponymous open source private cloud project on Wednesday, improving the scaling of the free, GPL-licensed widgetry.

Eucalyptus 2.0, described as a major rev, is supposed to be able to support massive private and hybrid clouds. Its performance has also been enhanced and it should deploy without modification on existing IT infrastructure.

The company can't quantify exactly how scalable the thing is, but its notion of scalability covers both front-end transactional scalability and back-end resource scalability. Eucalyptus 2.0 improves scaling of both the back-end cluster controllers and the node controllers.

The widgetry now also supports iSCSI targets for EBS volumes, which should make it easier to overlay a Eucalyptus cloud on existing IT infrastructure. Users can move the EBS controller machine anywhere in the cloud, including outside the broadcast domain of the cloud nodes.
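
Because Eucalyptus exposes an EC2-compatible API, EBS-style volumes can be created and attached with ordinary EC2 client libraries pointed at the Eucalyptus front end. The sketch below uses the AWS SDK for Java; the endpoint URL, availability-zone name, instance ID and credentials are all placeholders.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.AttachVolumeRequest;
    import com.amazonaws.services.ec2.model.CreateVolumeRequest;
    import com.amazonaws.services.ec2.model.CreateVolumeResult;

    public class EucalyptusVolumeDemo {
        public static void main(String[] args) {
            AmazonEC2Client ec2 = new AmazonEC2Client(
                    new BasicAWSCredentials("EC2_ACCESS_KEY", "EC2_SECRET_KEY"));
            // Point the EC2-compatible client at the local Eucalyptus front end
            // (hypothetical host name).
            ec2.setEndpoint("http://eucalyptus.example.com:8773/services/Eucalyptus");

            // Create a 10 GB EBS-style volume in the named availability zone.
            CreateVolumeResult created = ec2.createVolume(new CreateVolumeRequest()
                    .withSize(10)
                    .withAvailabilityZone("cluster01"));
            String volumeId = created.getVolume().getVolumeId();

            // Attach the volume to a running instance as /dev/sdb.
            ec2.attachVolume(new AttachVolumeRequest()
                    .withVolumeId(volumeId)
                    .withInstanceId("i-12345678")   // placeholder instance ID
                    .withDevice("/dev/sdb"));
        }
    }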

KVM virtio support has been added. Virtio is KVM's paravirtualized I/O framework, a common abstraction and driver set for virtualized devices. It means users can choose between fully emulated device drivers and kernel-supported virtio devices when tuning I/O performance.

To maintain Eucalyptus' compatibility with Amazon, it's been given S3 versioning. Users can apply version control to the objects stored in Eucalyptus Walrus and, through the Eucalyptus API, retrieve specific versions of objects.
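
Since Walrus tracks the S3 API, retrieving a specific object version should look like an ordinary S3 versioned GET aimed at the Walrus endpoint. The sketch below uses the AWS SDK for Java with a placeholder endpoint, bucket, key and version ID; it is illustrative rather than something tested against Eucalyptus 2.0.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3Object;

    public class WalrusVersionedGet {
        public static void main(String[] args) throws Exception {
            AmazonS3Client s3 = new AmazonS3Client(
                    new BasicAWSCredentials("WALRUS_ACCESS_KEY", "WALRUS_SECRET_KEY"));
            // Point the S3-compatible client at Walrus (hypothetical host name).
            s3.setEndpoint("http://eucalyptus.example.com:8773/services/Walrus");

            // Fetch one specific version of an object; bucket, key and version ID
            // are placeholders.
            S3Object object = s3.getObject(
                    new GetObjectRequest("backups", "config.xml", "VERSION-ID-PLACEHOLDER"));
            System.out.println("Retrieved version: "
                    + object.getObjectMetadata().getVersionId());
            object.getObjectContent().close();
        }
    }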

The rev can be downloaded at http://open.eucalyptus.com.

The company says it has enhanced the site to make submitting patches easier and more transparent. Online click-through agreements let community members e-sign the Contributor License Agreement (CLA), and every contributor can now view and track issues on the Eucalyptus issue-tracker portal as well as see contributions made by the community. There's also a new section where the community can contribute directly to the Eucalyptus documentation.


<Return to section navigation list> 
