Tuesday, January 25, 2011

Windows Azure and Cloud Computing Posts for 1/25/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Panagiotis Kefalidis (@pkefal) reported a Bug in SQL Azure documentation on how to calculate DB size in a 1/25/2011 post:

A fellow Windows Azure MVP, Rainer Stroper, had a very interesting case recently where he got a "reached quota" message for his SQL Azure database, although the query was indicating he was using only about 750 MB of a 1 GB Web Edition database.

The problem was narrowed down to a bug in the documentation (http://msdn.microsoft.com/en-us/library/ff394114.aspx); as per Microsoft Support's suggestion, the correct query to use is this:

SELECT SUM(reserved_page_count)*8.0/1024 + SUM(lob_reserved_page_count)*8.0/1024 FROM sys.dm_db_partition_stats

in order to get accurate metrics.

Be sure you use that query, so you won't have any unpleasant surprises.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Glenn Gailey [pictured below] reported OData Browser Available for Windows Phone 7 in his MSDN blog post of 1/24/2011:

Phani [Raju] has written a nice WP7 app that enables you to browse an OData feed on your Windows phone. I got a chance to play with it last week when I helped with some usability testing. The app comes preloaded with several OData feeds (Netflix is the most interesting of these), and you can add your own. The app demonstrates client-side paging (which is configurable) and it leverages the pivot control to effectively provide different views of entity data, including the Atom entry, navigations, and media resource data. It also tombstones feed data elegantly when the app is deactivated and reactivated.

Here's a screen capture of the Titles feed from Netflix in the OData Browser.

WP7Snapshot

You can get the OData Browser app from Marketplace:  http://bit.ly/ODataWp

Phani has more info on his blog:  http://blogs.msdn.com/b/phaniraj/archive/2011/01/18/odata-browser-for-windows-phone-7.aspx

For more general information on accessing an OData feed on a Windows Phone, see Open Data Protocol (OData) Overview for Windows Phone.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team Blog explained How Advanced Telemetry became instantly profitable after migrating to the Windows Azure Platform in a 1/25/2011 post:

We recently published this case study on Advanced Telemetry, a startup that offers an extensible, remote, energy-monitoring-and-control software framework.

The company migrated its telemetry solution to the Windows Azure Platform in order to expand into new vertical markets without having to incur the costs of investing in server infrastructure.

As part of its solution, the company used Windows Azure, SQL Azure and Windows Azure AppFabric.

The developers used the AppFabric Service Bus to build more flexibility into the middleware tier.

In a recent interview with MSDN, Tom Naylor, Founder and Chief Technology Officer of the company, is quoted saying:

Windows Azure, SQL Azure, and Windows Azure AppFabric are business enabling technologies that we’re using as a new computing paradigm to build our business through OEMs. When we signed our first OEM license deal, we became instantly profitable for the first time. We’ve also reduced our IT infrastructure expenses by 75 percent and marketing costs by at least 80 percent. SQL Azure and Windows Azure AppFabric offered all the compute and data storage services that we needed to customize our telemetry software for OEMs wanting to offer the product in different vertical markets, helping to generate a new revenue stream for us. At the end of the day, Windows Azure changed our world.

You can read more details in the full interview with Naylor on the SQL Azure Team Blog, and in the case study.

If you want to learn about additional Windows Azure AppFabric case studies, be sure to check the Featured Content section on our website.

You can also start enjoying the benefits of the Windows Azure Platform. Check out our free trial offer and get started!


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Colbertz announced Code sample update in January, 2011 - Microsoft All-In-One Code Framework in a 1/25/2011 post to the All-In-One Code Framework Blog:

The code samples in the Microsoft All-In-One Code Framework were updated on 2011-1-23.

Download address: http://1code.codeplex.com/releases/view/59640#DownloadId=201866

You can download individual code samples or browse code samples grouped by technology in the updated code sample index.

If this is the first time you have heard about the Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.

--------------

New Azure code samples

CSAzureBingMaps and VBAzureBingMaps

Download
C# version: http://1code.codeplex.com/releases/view/59639#DownloadId=201758
VB version: http://1code.codeplex.com/releases/view/59639#DownloadId=201818

The AzureBingMaps sample is an application sample demonstrating how to design and build a solution that combines multiple cloud services and client devices. It uses the following cloud services:

  • Windows Azure
  • SQL Azure
  • Windows Azure AppFabric
  • Windows Live Messenger Connect
  • Bing Maps

It also contains client applications for the following devices:

  • An HTML client for web browsers
  • A Silverlight client for Windows PCs and Macs
  • A Silverlight client for Windows Phone devices

The sample also demonstrates a lot of technologies, such as Entity Framework, WCF, jQuery, and so on.

You can find a series of blog posts describing it at http://blogs.msdn.com/b/windows-azure-support/archive/2010/08/11/bring-the-clouds-together-azure-bing-maps.aspx. A live demonstration of the HTML client can be found at http://azurebingmaps.cloudapp.net/HtmlClient.aspx. A live demonstration of the Silverlight client can be found at http://azurebingmaps.cloudapp.net/SilverlightClient.aspx. Note that we don't promise to always keep the live demonstrations available.

Here is a screenshot of the sample application:

Colbertz continues with details of:

  • New Windows General and IE code samples
  • New Windows Forms code samples
  • New ASP.NET code samples

To learn more about the program, check out the Microsoft News Center tells the story of All-In-One Code Framework post of 1/17/2011.


Andy Cross (@andybareweb) reported a Localization problem with MonAgentHost in Azure SDK 1.3 in a 1/25/2011 post to his Bare Web site:

Over the last few days, I have been following and participating in a conversation on the MSDN forum called Strange Diagnostics Errors. The problem seems to be with Azure SDK version 1.3 and localized versions of Windows running the MonAgentHost. The error – raised by EnCey on a de-AT localised version of Windows and confirmed by Xavi Paper on an es localised version – results in many errors posted to the Compute Emulator UI such as:

[MonAgentHost] Error: MA EVENT: 2011-01-18T12:34:13.898Z
[MonAgentHost] Error:   2
[MonAgentHost] Error:   7840
[MonAgentHost] Error:   3908
[MonAgentHost] Error:   SelfMonitoring
[MonAgentHost] Error:   0
[MonAgentHost] Error:   x:\rd\rd_fun_stable\services\monitoring\agent\dll\selfmon.cpp
[MonAgentHost] Error:   MASelfMon::GetProcCntrs
[MonAgentHost] Error:   1417
[MonAgentHost] Error:   ffffffffc0000bb8
[MonAgentHost] Error:   0
[MonAgentHost] Error:
[MonAgentHost] Error:   PdhAddCounter(\Process(MonAgentHost#0)\ID Process) failed

In the course of my investigations, I found that the problem lies in the Azure SDK's attempt to use the non-localized performance counter name. In an attempt to help my fellow Europeans, I tried many things but haven't yet come up with a solution. For all of you out there with localized versions of Windows, the best advice I can offer is to ignore the error and to post any workarounds you come up with on the forum linked above.

EnCey confirmed that the problem doesn't seem to prevent logging to the WADLogsTable, which was his primary goal. I hold out a firm hope that Microsoft will resolve this bug quickly for their worldwide partners who aren't as lucky as us native English speakers with their choice of OS flavour (not an intentional use of en-GB).

The difficulty is purely in the console UI; I don't believe the problem extends to any errors going into WADLogsTable, so you could migrate your primary debugging information from the console to the Azure Diagnostics API and the WADLogsTable.
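If you want to go that route, here is a minimal sketch (mine, not Andy's) of wiring trace logging to WADLogsTable in an SDK 1.3 role's OnStart; the one-minute transfer period and Verbose filter are arbitrary choices for illustration:

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default diagnostics configuration.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Ship Trace messages to WADLogsTable every minute.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // The "Diagnostics" import in the .csdef defines this connection string setting.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        Trace.WriteLine("Diagnostics started", "Information");
        return base.OnStart();
    }
}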

I will update with any further information that I have. If anybody else has seen this issue, please let me know.


Steve Marx (@smarx) announced My Blog is Now Running on Ruby in Windows Azure on 1/24/2011:

Lately I've been playing around with Ruby in Windows Azure… my blog's now running on Ruby in Windows Azure. I rewrote my blog engine (previously ASP.NET MVC) in Sinatra. As before, blog posts and images are stored in Windows Azure storage, so I'm making use of the waz-storage gem to interact with storage from Ruby. I'm using Application Request Routing in IIS as a reverse proxy in front of a couple instances of Thin.

An unintended side-effect of all of this is that those who subscribe to my blog probably saw ten new posts today (just old ones showing up again in the feed). I tried to prevent that by keeping the feed URLs and IDs the same, but it looks like I missed something.

This would be a really good time to switch to using the new RSS and Atom feeds: http://blog.smarx.com/rss and http://blog.smarx.com/atom. As soon as I can, I want to get rid of the old /atompub.svc/… URLs.


Avkash Chauhan engaged in Dissection of a Windows Azure SDK 1.3 based ASP.NET Web Role in Full IIS mode & HWC in a 1/24/2011 post:

Let's start from the point that you have an ASP.NET-based web role, MainWebRole.DLL, which you have created using Windows Azure SDK 1.3. The ServiceDefinition.csdef settings can run your web role in the following two modes:

1. Full IIS Mode

2. HWC (Hostable Web Core) Mode

Full IIS Mode:

Let's start with Full IIS mode. In this mode, your ServiceDefinition.csdef will have a <Sites> section as below:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureVMAssistant" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MainWebRole">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WebRole>
</ServiceDefinition>

The highlighted <Sites> section above is important, as it makes your application run in Full IIS mode. When you RDP to your Windows Azure VM, you will see that two processes are handling your service:

1. WaIISHost.exe

2. w3wp.exe

If you look at the .NET assemblies in both of the above processes, you will see that MainWebRole.DLL is loaded in both:

WaIISHost.EXE

w3wp.exe

HWC (Hostable Web Core) Mode

In this mode, your ServiceDefinition.csdef MUST have the <Sites> section commented out, as below:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureVMAssistant" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MainWebRole">
    <!--<Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>-->
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WebRole>
</ServiceDefinition>

Just making the above change causes your application to run very differently. You will see that only one process, WaWebHost.exe, is taking care of your application, and MainWebRole.DLL is loaded in that process as below:

Summary:

The gist to understand here is that there is a significant difference in how your code is hosted in Windows Azure, depending on whether you choose HWC or Full IIS.

As you know, your web role has two important pieces:

1. RoleEntryPoint (the OnStart method of your WebRole class, which derives from RoleEntryPoint)

namespace MainWebRole
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // For information on handling configuration changes
            // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
            return base.OnStart();
        }
    }
}

2. Your WebSite Content

When you have selected HWC mode:

  1. The RoleEntryPoint methods and your web site itself run under the WaWebHost.exe process.
  2. In the following diagram you can see how MainWebRole.dll is hosted in WaWebHost.EXE

When you have selected Full IIS mode:

  1. RoleEntryPoint runs under WaIISHost.exe,
  2. Your web site runs under a normal IIS w3wp.exe process.

In the following diagram you can see how MainWebRole.dll is hosted in WaIISHost.EXE & w3wp.exe

When you decide to upgrade your ASP.NET web role from Windows Azure SDK 1.2 to Windows Azure SDK 1.3, please consider the following scenarios:

1. Configuration Settings in Full IIS Mode:

You will see one common problem with Full IIS: code running in RoleEntryPoint cannot read web.config, which causes errors.

The reason for this problem is that your web site uses web.config as its configuration file; however, RoleEntryPoint does not have access to web.config because the RoleEntryPoint code runs in a separate process (WaIISHost.exe) from the web site (w3wp.exe).

The WaIISHost.exe process looks for configuration settings in a file named WaIISHost.exe.config, so you will need to create a WaIISHost.exe.config file with all the necessary settings and deploy it along with your application to avoid this problem.
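As a rough sketch (the setting name here is hypothetical, not from Avkash's post), code in OnStart reads such a setting through System.Configuration, which resolves to WaIISHost.exe.config in the role-host process, while the same call made from a page or controller resolves to web.config. A setting placed in ServiceConfiguration.cscfg is an alternative that both processes can read:

using System.Configuration;                       // requires a reference to System.Configuration
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Running in WaIISHost.exe: this reads <appSettings> from WaIISHost.exe.config,
        // NOT from the web site's web.config.
        string fromExeConfig = ConfigurationManager.AppSettings["PricingServiceUrl"];

        // Alternative: a setting declared in the .csdef/.cscfg is visible to both
        // the role host and the w3wp.exe site through the service runtime.
        string fromCscfg = RoleEnvironment.GetConfigurationSettingValue("PricingServiceUrl");

        return base.OnStart();
    }
}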

2. Problem accessing Static Members from your Web site and RoleEntryPoint

Because RoleEntryPoint and the web site run in two separate processes, static members will not be shared between them.
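A minimal illustration of the issue (hypothetical names, not Avkash's code): a static field set by the role entry point is simply not visible to the web site, because each process loads its own copy of the assembly:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

// Shared class compiled into MainWebRole.dll.
public static class PriceCache
{
    public static string LastRefreshed;   // each process gets its own copy
}

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Set in the WaIISHost.exe process...
        PriceCache.LastRefreshed = DateTime.UtcNow.ToString("o");
        return base.OnStart();
    }
}

// ...but a page or controller running in w3wp.exe sees its own, still-null copy:
// string value = PriceCache.LastRefreshed;   // null here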


Riccardo Becker (@riccardobecker) described VM Role considerations in a 1/24/2011 post to his Encloudify on Azure blog:

After experimenting a lot to get the VM Role working, a few considerations:

  • Take some time (a lot of time actually) to prepare your image and follow all prerequisites on http://msdn.microsoft.com/en-us/library/gg465398.aspx. Two important steps to take: build a base image VHD which will be the parent of all your other differencing disks. Differencing disks contain the specific characteristics of the VM role to upload and run. Typically you won't run your base VHD (it's just W2008R2) but it's the differencing disks that have the value add. Think of a development environment containing Visual Studio and other tools for your developers and/or architects, a specific VHD for testers having the test version of VS2010 installed, desktop environments with just Office tooling etc.
  • Don't bother trying to upload your sysprep'd W2008R2 VHD from Windows 7 :-)
    For some reason, after creating the VHD with all the necessary tools on it, csupload still causes some Hyper-V magic to happen. The thing is, Hyper-V magic is not available on Windows 7.
  • Use the Set-Connection switch of the csupload app to set a "global" connection, written to disk, in your command session and take it from there.
  • We started struggling from here concerning the actual csupload. The following message was displayed:

It tells me that the subscription doesn't have the VM Role beta enabled yet. The thing is... I did!

I'll just continue the struggle and get it to work... If you have suggestions, please let me know here or on Twitter @riccardobecker.


<Return to section navigation list> 

Visual Studio LightSwitch

Dan Moyer (@danmoyer) started a two-part series with a very detailed How to Connect to a WCF service from LightSwitch- Part One post of 1/24/2011 with downloadable source code:

Recently the question Can LS be used as a client to a Web Service? was asked on the LightSwitch forum.

The question got me thinking: how does one connect to a legacy WCF service from a LightSwitch application? Many shops have legacy web services which they may want to use from a LightSwitch application.

Because I've implemented WCF solutions in the past, I thought connecting to a WCF service from LightSwitch would be a piece of cake. As it turned out, it became a learning experience, as I needed to solve a few problems I hadn't encountered before. Because of the amount of content, I plan to make two posts to explain how I solved the problem and some of the issues I encountered.

The scenario is a business that needs to get current pricing information for a product. The user wants product price information updated from a LightSwitch screen when he or she clicks a button. On the button click, the screen's code connects to a web service to get current product pricing and updates the product table.

In this blog post I’ll discuss creating the WCF service and a proxy to that service which you can use in LightSwitch.

In the next blog post, I’ll show you how to connect to the WCF service from a LightSwitch screen and update the product table.

Creating the WCF Service

Because I'll be creating a WCF service and deploying the service to IIS on my local development box, I first start Visual Studio 2010 with administrative rights.

Next I create a blank solution called Product.Wcf

clip_image002

When creating a WCF solution, I prefer to define one assembly for the Interface file so the service and client code can use the same interface definition. For this demo I’ll simplify the solution and put the interface and the service code into the same project.

Add a WCF Service Library project to the Product.Wcf solution. Call it Product.Wcf.Service:

clip_image004

Delete the App.config, IService1.cs and Service1.cs files which Visual Studio added to the project:

clip_image005

Add two files to the project: IProductService.cs and ProductService.cs.

I want two methods for this WCF demo: GetProductPriceUpdate() and GetAllProductPriceUpdate().

I want the GetProductPriceUpdate() to return price information using a passed in product identifier.

I want GetAllProductPriceUpdate() to return a collection of products with price updates.

I want the returned data contained in an object called ProductPriceInfo. ProductPriceInfo holds the product number and the new list price and standard price for the product.

With these requirements in mind, the IProductsService.cs file looks like this:

clip_image007
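The screenshots don't reproduce here, so below is a minimal sketch of what such a contract might look like; the member names are inferred from the description above rather than copied from Dan's downloadable source:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

namespace Product.Wcf.Service
{
    [ServiceContract]
    public interface IProductService
    {
        // Returns updated pricing for a single product, or null if there is none.
        [OperationContract]
        ProductPriceInfo GetProductPriceUpdate(string productNumber);

        // Returns updated pricing for all products that have changed.
        [OperationContract]
        List<ProductPriceInfo> GetAllProductPriceUpdate();
    }

    [DataContract]
    public class ProductPriceInfo
    {
        [DataMember]
        public string ProductNumber { get; set; }

        [DataMember]
        public decimal ListPrice { get; set; }

        [DataMember]
        public decimal StandardPrice { get; set; }
    }
}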

For the service implementation, I’ll simulate reading results from a back end database by just returning a fixed result set. The simple service implementation becomes:

clip_image009
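Again as a sketch rather than Dan's exact code (product numbers other than the one mentioned below, and all prices, are placeholders), the canned implementation might look something like this:

using System.Collections.Generic;

namespace Product.Wcf.Service
{
    public class ProductService : IProductService
    {
        public ProductPriceInfo GetProductPriceUpdate(string productNumber)
        {
            // Simulate a database lookup: only one product has a pending price change.
            if (productNumber == "FR-R92B-58")
            {
                return new ProductPriceInfo
                {
                    ProductNumber = "FR-R92B-58",
                    ListPrice = 1431.50m,
                    StandardPrice = 868.63m
                };
            }
            return null;
        }

        public List<ProductPriceInfo> GetAllProductPriceUpdate()
        {
            // Simulate price updates for three products.
            return new List<ProductPriceInfo>
            {
                new ProductPriceInfo { ProductNumber = "FR-R92B-58", ListPrice = 1431.50m, StandardPrice = 868.63m },
                new ProductPriceInfo { ProductNumber = "FR-R92R-58", ListPrice = 1431.50m, StandardPrice = 868.63m },
                new ProductPriceInfo { ProductNumber = "FR-M94B-38", ListPrice = 1349.60m, StandardPrice = 739.04m }
            };
        }
    }
}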

GetProductPriceUpdate returns a price update for product "FR-R92B-58"; otherwise it returns nothing.

GetAllProductPriceUdpate() returns price updates for three products.

With the WCF service implementation defined, I want to deploy the WCF service to my development machine’s IIS server.

First create new project of type WCF Service Application and call it Product.Wcf.WEB

clip_image011

Delete the IService1.cs and Service1.svc.cs files Visual Studio includes in the project, and rename Service1.svc to ProductService.svc.

clip_image012

Next, in the Product.Wcf.WEB project, add a reference to the Product.Wcf.Service assembly.

clip_image013

Next, I open the ProductService.svc file and change the Service attribute to reference the fully qualified name of the service implementation:

clip_image014

Publish the WCF Service

The next step is to publish the web site. On my development machine, I entered the selections highlighted below:

clip_image001

Finally, start IIS Manager to configure the site and test it.

When you first start IIS Manager after publishing, you should see a folder labeled ProductService under the Default Web Site. Right-click this folder and click Convert to Application in the pop-up context menu. The icon should change from a folder icon to a site icon:

clip_image002[6]

Next, configure the site to enable directory browsing:

clip_image004[5]

With the ProductService site selected in the left pane, double click Directory Browsing.

Then click Enable in the Directory Browsing dialog:

clip_image006

Next click Browse to start an instance of Internet Explorer:

clip_image008

You should see the following in Internet Explorer.

clip_image009[5]

Assuming all is configured correctly, you should see the following when you click ProductService.svc:

clip_image011

Write down the line circled above; you'll need it in a moment.

Test the WCF Service

At this point, your web service is deployed and running. It's always good practice to verify that the code works, so next let's create a simple console application to test the WCF service.

Add a new project to your solution: create a Console Application project and call it Product.WcfConsoleTest.

clip_image002[8]

Now, start a Visual Studio 2010 Tools command prompt. If you haven’t done this before, navigate to:

Start -> All Programs -> Microsoft Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt (2010)

In the command prompt, navigate to the directory of your ConsoleTest project.

Then run the command you saw above in the Internet Explorer window.

clip_image004

Svcutil generates a file ProductWcfService.cs which contains the proxy code to connect to your WCF service. Add the ProductWcfService.cs file to your ConsoleTest project:

clip_image005[4]

Next, add the System.ServiceModel and System.Runtime.Serialization assemblies to your ConsoleTest project references:

clip_image006

Next, write some code to test the WCF service in Program.cs:

clip_image008

This simple test creates an instance of the ProductService proxy, using the address of the WCF service you saw in Internet Explorer above. The test then verifies that GetProductPriceUpdate returns a null ProductPriceInfo for a product other than "FR-R92B-58". It then checks that GetAllProductPriceUpdate() returns three ProductPriceInfo objects, as implemented in our service.
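For readers following along without the screenshots, a test along those lines might look like the sketch below; the proxy class name (ProductServiceClient) and the service address are assumptions based on the svcutil defaults and the IIS deployment above, not values copied from Dan's code:

using System;
using System.Diagnostics;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Address shown by IIS directory browsing above; adjust for your machine.
        var address = new EndpointAddress("http://localhost/ProductService/ProductService.svc");
        var client = new ProductServiceClient(new BasicHttpBinding(), address);

        // A product with no pending price change should return null.
        Debug.Assert(client.GetProductPriceUpdate("XX-0000-00") == null);

        // The canned implementation returns three updated products.
        // (svcutil generates an array by default; use .Count if you asked for a List.)
        var all = client.GetAllProductPriceUpdate();
        Debug.Assert(all.Length == 3);

        Console.WriteLine("All WCF proxy tests passed.");
        client.Close();
    }
}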

After running this test, I thought, “Great! Now I can use the generated ProductWcfService.cs proxy code in my LightSwitch application”.

Well, I found out this was a very wrong assumption. First, when I tried to add a reference to an assembly containing this proxy to a LightSwitch project, Visual Studio showed the reference added, then in a few seconds removed it. Uh?!? I thought.

How to create a proxy you can use in LightSwitch

I fiddled around and soon realized the proxy code I used in the ConsoleTest project (generated by SVCUTIL.exe) cannot be used in a LightSwitch application, which is in reality a Silverlight application. I also found that Silverlight requires different versions of the ServiceModel and Runtime.Serialization assemblies, and that Silverlight requires the proxy to expose asynchronous interfaces on the method calls. SVCUTIL creates synchronous method calls, which won't work in the LightSwitch environment.

After debugging in LightSwitch and exploring deeper, I found several pages which helped me come up with a solution. Here are references to pages which helped me and which you may want to read and drill deeper:

I discovered one way to create a usable proxy is by using the utility SLsvcutil.exe. I also learned I needed to call the WCF methods asynchronously. Because I want to hide the complexity of asynchronous calls from the LightSwitch code, I decided to write a façade class. A proxy to a proxy you might say.

I also learned you should use a Silverlight Class library instead of a regular .NET Class library project in order to pick up references to the correct versions of the ServiceModel and Runtime.Serialization assemblies.

With the above in mind, add another project to the Product.Wcf solution. The project type is a Silverlight Class Library.  Call the new project Product.Wcf.LSProductProxy:

clip_image002[10]

Delete file Class.cs from the Product.Wcf.LSProductProxy project.

Add references to the ServiceModel and Runtime.Serialization assemblies. Notice the version numbers of these assemblies are 2.0.5.0. A little later as you’re working in the LightSwitch project, you’ll notice these are the same versions used by LightSwitch.

clip_image004[5]

Next, in the Visual Studio 2010 command prompt, navigate to the directory of your Product.Wcf.LSProductProxy project. Run the SLSvcUtil.exe utility to generate a proxy usable in a Silverlight environment. On my machine, the SLSvcUtil.exe is located here:

c:\Program files (x86)\Microsoft SDKs\Silverlight\v4.0\Tools\

clip_image006[6]

Add the generated file, ProductWcfService.cs, which contains the proxy to the WCF service, to your project.

Add another file called LSProxy.cs to your project. LSProxy will implement the calls to the WCF service and encapsulate the complexities of managing asynchronous method calls.

Here is the implementation of LSProxy:

clip_image008[5]

In lines 27 and 28, the constructor wires up the event handlers for the asynchronous calls:

clip_image010

In line 22 and 23, the constructor initializes an AutoResetEvent for each asynchronous method:

clip_image011[5]

The following is what happens when the LightSwitch application makes a WCF service call. For example, when the LightSwitch code calls LSProxy.GetProductPriceUpdate the following code executes:

clip_image013

The LightSwitch code calls LSProxy.GetProductPriceUpdate(string product). This method calls the asynchronous WCF method GetProductPriceUpdateAsync(product) implemented in the generated code file, ProductWcfService.cs. A call to the WCF service is initiated and control returns immediately to LSProxy.GetProductPriceUpdate. The thread which made the call to GetProductPriceUpdateAsync then blocks, waiting for a signal from the AutoResetEvent (_autoResetEventGetProductPriceUpdate.WaitOne).

The thread executing LSProxy.GetProductPriceUpdate is the UI thread that is running the LightSwitch screen. Note that while this thread is blocked, the LightSwitch application can appear blocked as well. Handling this problem may be as simple as providing a timeout in the WaitOne call; for very long-running processes, a more complex solution, such as showing a cancelable pop-up display with animation, may be needed (a topic for a future blog post).

When the WCF service returns data to the proxy, the data comes in on a different thread of execution and runs _proxy_GetProductPriceUpdateCompleted. This method copies the received data into the private member variable and unblocks the thread which called GetProductPriceUpdate.

Being unblocked, the main thread gets the data from the member variable and returns the data to the LightSwitch application.
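To make the flow above concrete, here is a trimmed-down sketch of what such a façade can look like (one method only; member and event names are illustrative, following the event-based async pattern that SLSvcUtil generates, not copied from Dan's LSProxy.cs):

using System.ServiceModel;
using System.Threading;

public class LSProxy
{
    private readonly ProductServiceClient _proxy;
    private readonly AutoResetEvent _getProductPriceUpdateDone = new AutoResetEvent(false);
    private ProductPriceInfo _productPriceUpdateResult;

    public LSProxy(string serviceAddress)
    {
        _proxy = new ProductServiceClient(
            new BasicHttpBinding(), new EndpointAddress(serviceAddress));

        // Wire up the completion handler for the asynchronous call.
        _proxy.GetProductPriceUpdateCompleted += OnGetProductPriceUpdateCompleted;
    }

    // Synchronous-looking call that LightSwitch screen code can use.
    public ProductPriceInfo GetProductPriceUpdate(string productNumber)
    {
        _proxy.GetProductPriceUpdateAsync(productNumber);

        // Block the calling thread until the completed event signals.
        // (Consider passing a timeout so an unreachable service can't hang the screen.)
        _getProductPriceUpdateDone.WaitOne();

        return _productPriceUpdateResult;
    }

    private void OnGetProductPriceUpdateCompleted(
        object sender, GetProductPriceUpdateCompletedEventArgs e)
    {
        // Runs on a different thread when the WCF call returns.
        _productPriceUpdateResult = (e.Error == null) ? e.Result : null;
        _getProductPriceUpdateDone.Set();
    }
}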

Summary

In this post I demonstrated how to create a simple WCF service, deploy it, test it from a console application, and create a proxy to the service which you can use in a LightSwitch application.

In my next post, I’ll show how to use the proxy to make WCF service calls and update data within LightSwitch.

Project source files ProjectWcfSolution.zip


<Return to section navigation list> 

Windows Azure Infrastructure

Buck Woody continued his use-case series with Windows Azure Use Case: Agility in a 1/25/2011 post:

This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description:

Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

Implementation:

When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process.

A simplified view of an ALM process is as follows:

  • Requirements
  • Analysis
  • Design and Development
  • Implementation
  • Testing
  • Deployment to Production
  • Maintenance

In an on-premise environment, this often equates to the following process map:

Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.

Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.

Design and Development: Code written according to organization's chosen methodology, either on-premise or to multiple development teams on and off premise.

Implementation: Code checked into main branch. Code forked as needed.

Testing: Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed.

Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.

Maintenance: Code changes evaluated and altered according to need.


In a distributed computing environment like Windows Azure, the process maps a bit differently:

Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.

Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.

Design and Development: Code written according to organization's chosen methodology, either on-premise or to multiple development teams on and off premise.

Implementation: Code checked into main branch. Code forked as needed.

Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. performed.

Deployment to Production: Code deployed to Azure. Point in time backup and recovery plans configured and put into place. (HA/DR and automated backups already present in Azure fabric.)

Maintenance: Code changes evaluated and altered according to need.

This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario.

An additional benefit is the "Azure Marketplace." In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

Resources:


Klint Finley (pictured below) discussed The State of the Platform and its Increasing Diversity in a 1/24/2011 post to the ReadWriteCloud blog:

As RedMonk's Stephen O'Grady points out, platform-as-a-service wasn't a hot topic for most of 2010. But interest has exploded since Salesforce.com purchased Heroku and Red Hat purchased Makara. Last week Amazon Web Services launched its Elastic Beanstalk PaaS, the week before last PHPCloud raised $1.8 million, and we've been seeing Node.js hosts cropping up steadily for the past few weeks.

We predicted last year that PaaS would emerge as the hottest area of cloud computing. Today we'll take a look at the state of PaaS. If you're still confused about the difference between IaaS, PaaS and SaaS, check out CloudAve's guide to the terminology.

The Big List of PaaS Providers

We were going to make a list of all the "Heroku for X" type PaaS providers we could find. Then we found Roch Delsalle's big list. It's the most comprehensive list we've seen, and Delsalle has been good about updating it with newcomers.

Delsalle lists PaaS providers for the following platforms:

  • Ruby
  • Python
  • PHP
  • Drupal
  • .NET
  • Java
  • Node.js
  • RingoJS

The Three Types of PaaS

For an interesting take on the different approaches to PaaS, check out Subraya Mallya's article "Multiple Personalities of Platform-as-a-Service." Mallya looks at the different types of PaaS:

  • Application Development Platforms
  • Application Management Platforms
  • Data Processing Platforms

Mallya also covers what capabilities you should look for in each type of PaaS.

What Developers Look for in PaaS

RedMonk's Michael Coté takes a look at how developers decide on a PaaS. Coté notes that for many developers, the financial opportunities presented by proprietary platforms with established user bases (for example, Force.com) will often trump fears of vendor lock-in. However, those building full applications with an eye to the future tend to prefer open platforms like Heroku. Coté also looks into why companies, particularly middleware vendors, are getting into the PaaS business.

The Future: Certification

Last week we made the case for the need for third-party certification to assure customers that PaaS providers follow best practices. In 2009, Alan Wilensky wrote a four-part series examining the case for cloud provider certifications, including some ideas for what to include.

For a further look at PaaS and the future of the cloud, please see our report The Future of the Cloud: Cloud Platform APIs are the Business of Cloud Computing.

Photo by LukeGordon1

It's my opinion that PaaS was "a hot topic for most of 2010" and still is in early 2011.


James Urquhart analyzed 'Compute efficiency' and cloud computing in his 1/24/2011 post to C|Net News’ Wisdom of Clouds blog:

Energy analogies abound with respect to cloud computing and its effect on enterprise IT operations and economics. Nick Carr's seminal work, "The Big Switch," laid out the case for why computing will be subject to many of the same forces as the electricity market was in the early 20th century. While I've pointed out the analogy isn't perfect, I will say there are often interesting parallels that are worth exploring.

One example is the ongoing discussion about the effect of cheaper computing on the reduction (or lack thereof) of future IT expenditures. Simon Wardley, a researcher at the CSC Leadership Forum, has often pointed out that, while cheaper operations costs and reduced capital spending should signal a reduction in spending, the truth is quite the opposite.

Wardley points to a 19th century economist by the name of William Stanley Jevons, who outlined why this won't be so. The so-called Jevons paradox is explained as follows:

In economics, the Jevons paradox, sometimes called the Jevons effect, is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

Author and blogger Andrew McAfee presented an excellent overview of Jevons' 1865 study of the effects of more efficient coal furnaces on the consumption of coal:

As coal burning furnaces become more efficient, for example, British manufacturers will build more of them, and increase total iron production while keeping their total coal bill the same. This greater supply will lower the price of iron, which will stimulate new uses for the metal, which will stimulate demand for more furnaces, which will mean a need for more coal. The end result of this will be, according to Jevons, "the greater number of furnaces will more than make up for the diminished consumption of each."

McAfee goes on to point out that the history of lighting--from ancient Babylonia to today--reinforces this effect. Does the fact that it now takes a tiny fraction of the man-hours it once did to produce an hour of light mean we consume less energy on lighting? No.

For me, there is an interesting parallel between computing and lighting, at least with respect to the Jevons paradox. Cloud computing is the latest in a series of innovations that have reduced the overall cost of a "unit of work" (whatever that is) of computing and data storage.

Yet, with each innovation, we have continued to increase the amount of work our computer systems do, and continue to increase spending on new computers. And, as with lighting, we don't always seem to make the most efficient decisions about when and where to use computing.

If you've ever seen one of those "dark side of earth" photographs from space (like the one above), you know we use much more lighting than we absolutely need. Electricity is cheap, so why not use it? Whether it is to increase safety, secure property, or simply make entertainment possible, we light because we can.

I would argue the same can be said about how we use computers in business. The larger the company, the broader the application portfolio, and--quite likely--the less efficient design of the overall IT environment. Redundant functions, duplicated data, excess processing--these are all rampant among our enterprise IT systems.

Many would also argue that cloud will make this worse before it makes it better. Bernard Golden, CEO of Hyperstratus, describes the difficulty cloud brings to capacity planning (and some possible solutions). Chris Reilly, an IT professional at Bechtel, uses the Jevons paradox to explain virtual-machine consumption data from a real-world IT operation.

Perhaps the most cautionary tale for me, however, is what happened to IT operations with the introduction of "cheap" x86 servers in the 1990s. I was doing software development back then, and I cringe thinking of all the times I or a colleague justified increasing the capacity of an application infrastructure with "hey, servers are cheap."

Does anyone remember how "complex" IT was before all those servers arrived? Anyone want to argue IT operations got easier?

Similarly, the availability of cheap compute capacity in the cloud is going to drive inefficient consumption of cloud resources. Yes, each app may optimize its use of resources for its specific need, but thousands of apps will be developed, deployed, and integrated just because it's easy to do so, and "sprawl" will become a fun buzzword again.

The day will come, however, when your CFO will stop asking "how do we reduce the cost of IT infrastructure?" and start asking "how do we reduce our monthly bill?"

I want to plant a seed in your mind today, a tiny germ of an idea that I think will grow into a fully mature meme in the years to come. As you utilize cloud computing to meet latent business demands, remember to be "compute efficient."

Follow the model of FedEx and Bechtel: use infrastructure to increase the efficiency of IT systems architectures, not promote greater inefficiency. That's not to say you don't take chances on possibly innovative ideas that may fail, but that you understand the cost of compute inefficiency is more than money, but the pain of complexity in integration and operations.

In fact, if you are an enterprise architect, I think it lands on you to make sure your move to the cloud is responsible; that your company practices compute efficiency and avoids complexity at the business systems level. If you are a developer, it lands on you to think before you build and deploy, and that you make sure someone pays attention for the entire application lifecycle.

Then again, this has been the mantra of the service-oriented architecture world for over a decade now, hasn't it? Maybe compute efficiency isn't that important after all. After all, inefficiency is cheap.

(Image credit: NASA/Visible Earth)


David Linthicum asked Is the Lack of SOA Talent Killing Cloud Computing? in a 1/24/2011 post to ebizQ’s Where SOA Meets Cloud blog:

Most consider cloud computing to be a magical technology that will solve all of the world's IT problems. The reality is that you're still doing computing. You're still storing stuff, still processing stuff, still placing information in databases. This means -- dare I say it? -- you need to put some architectural forethought around cloud computing.

The lack of an architecture -- typically, the lack of a SOA -- is a recipe for failure in the world of cloud computing. An architecture provides the structure necessary to mesh your existing enterprise IT assets with the emerging world of cloud computing. Most who leverage clouds, PaaS, IaaS, or SaaS, understand the dilemma and quickly turn to basic architecture and planning...only to find that those 'in the know' are nowhere to be found.

Good SOA architects are a rare species. Many who claim to have mad SOA skills come up short. The trend is to leverage whatever the next magical and hyped technology is in the hopes that no one will notice that the existing architecture is a huge mess, and the addition of cloud computing resources will just make it messier.

Making matters worse are the numbers of SOA technology vendors who have falsely positioned their technology as "cloud computing technology," when they should be focused on SOA leading to successful cloud computing. There is a huge difference. This vendor hype has just added to the confusion around both cloud computing and SOA, and end users are once again looking to toss technology at problems that really need better architectural thinking.

Clearly, we don't have enough SOA A-list players to go around as cloud computing explodes.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Alex Williams claimed What Customers Really Need is a Flat Network in a 1/24/2011 post to the ReadWriteCloud blog:

It is inevitable that any sizable enterprise will consider how the cloud and the data center can be combined in the most dynamic, efficient and secure way possible.

It's a decision making process that gets even more complex when considering the dizzying terminology that makes sense only after considerable conversation.

I kept thinking about these customers when listening to Hewlett-Packard (HP) discuss its new hybrid cloud service.

HP is announcing today that it is offering:

  • A public cloud environment for customers that want all of their IT services in the cloud.
  • An extended data center environment that leverages the security of the data center with the distributed network of a cloud environment.
  • An automated cloud management system for managing multiple cloud and data center environments.
  • Workshops to help IT managers and executives better understand the complexities of what's available and better define what they need in order to create a network that meets their needs.

It gets complicated when Hewlett-Packard starts getting into the details of what they are offering. It's not the first time we have heard a company call what they are offering a hosted and private cloud environment. That's how HP describes its offering for what amounts to an extended data center that uses a virtualized environment to create a network that leverages the cloud and its core, on-premise environment.

But this is the nature of the market. Customers believe they need private clouds. What they really need are flat networks to move resources in a virtual environment. The network needs to be able to manage any variety of applications moving between thousands of virtual machines.

To do that, HP offers a converged infrastructure that includes the storage, servers that can be provisioned in clusters, the network, power and cooling and the management software.

My colleague Klint Finley spent a few days with HP and learned a lot about these infrastructures:

HP claims that not only can it do all the work of setting up racks, wiring, optimizing power consumption and building out the full data center for you - it claims it can do so at the fraction of the cost of doing it yourself. HP takes advantage of assembly line productivity to industrialize the process of creating data centers.

The management software is a critical component of the offering. It may also be one of the most important aspects of what is required to manage a virtualized infrastructure. It's the automation component. It orchestrates the process so virtual machines can be managed effectively.

The team at HP realizes that there is an inherent complexity in the language associated with today's complicated array of data center offerings. So we can see why workshops make sense.

In the end, though, we find the language a bit of a folly. The hardware and software that HP is selling cost millions of dollars. What it provides is a network; albeit different from previous generations, it is still a network. That's not to say the technology is unnecessary; it's just more what we think of as networked data centers rather than cloud computing.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie claimed It used to be that "mobile" access implied "remote" access. That's no longer true. As the variety of clients continues to expand, along with the venues from which users can access corporate resources, the ability to intelligently enforce access-control policies also increases in strategic importance in a preface to her More Users, More Access, More Clients, Less Control post of 1/25/2011 to F5's DevCentral blog:

Every time we add a new access method in the enterprise, we go through a period in which we expend a lot of time and energy trying to figure out how to control that access.

The consumerization of IT, for example, in which consumer-grade devices (gadgets) have been slowly but surely permeating every facet of the business, has led to a need for IT not only to support but to manage, i.e. control access from, such devices. The lure of virtual desktop infrastructure (VDI) continues to be strong, providing myriad benefits for IT in terms of management, security, and simplified support across a broader variety of clients.

But it also introduces challenges that must be addressed lest the benefits of a VDI implementation become quickly lost. Performance can be significantly impacted by the deployment of VDI, and the access-control challenges introduced are nothing if not non-trivial.

The server farm will carry a higher processing load, and will need more highly specified storage systems, by comparison with a more conventional client/server architecture. Some firms may need to fund a network upgrade also, to cope with higher data transport demands. [emphasis added]

-- Forrester Research Analyst Andrew Parker, "Desktop Virtualization — How Will It Impact Desktop Outsourcing Costs?"

If prognosticators are correct, these challenges will become serious impediments to successful VDI implementations as early as this year (2011). The latest projections from research firms regarding the deployment of virtual desktop technology are staggering. Gartner forecasts the install base will almost triple this year (2011), noting "HVD [Hosted Virtual Desktop] works best with well-managed environments. Currently, that means fully locked down."

But it is hardly advantageous to overload an already overloaded admin and operations staff by requiring yet another layer of access control specifically to address virtual desktops and mobile device access.

What’s needed is something strategic, something more intelligent that can apply access policies based on context such that access to corporate resources can be managed more consistently across the growing variety of endpoints and locations from which those resources are being requested.

CONTROL and CONTEXT
The one thing that is common across most emerging data center and deployment models today is control, or more accurately, the loss of control they impose on IT.

VDI, mobile endpoints, cloud computing. These technologies all share one common and complexifying attribute: they potentially erode the control IT needs over access to resources to ensure that corporate data and applications remain secure and uncompromised. Introduce a few emerging threat vectors thanks to cloud computing and virtualization into the picture and the need for access control and endpoint management becomes not just a nice-to-have, but a critical component of the long-term security of data, applications, and the data center network.

Even if you do have access control under, well, control, when you introduce the distributed nature of cloud computing and virtualization you start running into problems associated with a loss of the context in which to evaluate and apply that control. It's not enough to know that User A is requesting a virtual desktop; you also need to know from where and from what device that user is making the request. It is important to understand whether User A is attempting to access resources from their home network or an Internet cafe somewhere in Bangladesh.

The ability to dynamically apply graded authentication and authorization to resources based on the context of a request is also increasingly important in a world where a user may flip seamlessly from iPhone to Windows desktop to Blackberry tablet (Hey, it's coming. It'll happen.) And that is a bigger problem than some might think, because it's not just an iPhone on Verizon or AT&T that's a problem; it's an iPad that may be connected via WiFi from within your own network.

1. Unauthorized Smartphones on Wi-Fi Networks

Smartphones create some of the greatest risks for enterprise security, mostly because they're so common and because some employees just can't resist using personal devices in the office -- even if their employers have well-established policies prohibiting their use.

"The danger is that cell phones are tri-homed devices -- Bluetooth, Wi-Fi and GSM wireless," says Robert Hansen, founder of Internet security consulting firm SecTheory LLC. Employees who use their personal smartphones at work "introduce a conduit that is vulnerable to potential attack," he explains.

--  Six security leaks to plug right now, ComputerWorld (January 2011)

So it isn’t just from where, it’s from what device. You can’t just lock down applications and resources based on the network. Context-awareness is an integral – or should be an integral – part of any remote access-based strategy and increasingly it will be important to a general resource control strategy because “mobile device” no longer implicitly means “remote”. Mobile devices are inside the perimeter and they aren’t going away. VDI is gaining traction quickly because of its security and management benefits.

If you're going to deliver a fully configured and ready-to-use desktop via some virtual desktop infrastructure, you need to be concerned about where that desktop might be going and who might be requesting it. The only thing you can do is control access and delivery via infrastructure solutions that are intelligent enough to enforce policies based on a combination of variables. And that control of access and delivery will almost certainly need to look inward in addition to outward and cloudward, to ensure that those policies are appropriately enforced.

The hardest part of doing that is to do so without sacrificing performance and without blowing out your budget.


K. Scott Morrison posted Hacking the Cloud on 1/24/2011:

I’m not sure who is more excited about the cloud these days: hackers or venture capitalists. But certainly both groups smell opportunity. An interesting article published by CNET a little while back nicely illustrates the growing interest the former have with cloud computing. Fortify Software sponsored a survey of 100 hackers at last month’s Defcon. They discovered that 96% of the respondents think that the cloud creates new opportunities for hacking, and 86% believe that “cloud vendors aren’t doing enough to address cyber-security issues.”

I don't consider myself a hacker (except maybe in the classical sense of the word, which had nothing to do with cracking systems and everything to do with solving difficult problems with code), but I would agree with this majority opinion. In my experience, although cloud providers are fairly proficient at securing their own basic infrastructure, they usually stop there. This causes a break in the security spectrum for applications residing in the cloud.

Continuity and consistency are important principles in security. In the cloud, continuity breaks down in the hand-off of control between the provider and their customers, and potential exploits often appear at this critical transition.  Infrastructure-as-a-Service (IaaS) provides a sobering demonstration of this risk very early in the customer cycle. The pre-built OS images that most IaaS cloud providers offer are often unpatched and out-of-date. Don’t believe me? Prove it to yourself the next time you bring up an OS image in the cloud by running a security scan from a SaaS security evaluation service like CloudScan. You may find the results disturbing.

IaaS customers are faced with a dilemma. Ideally, a fresh but potentially vulnerable OS should first be brought up in a safe and isolated environment. But to effectively administer the image and load patch kits, Internet accessibility may be necessary. Too often, the solution is a race against the bad guys to secure the image before it can be compromised. To be fair, OS installations now come up in a much more resilient state than in the days of Windows XP prior to SP2 (in those days, the OS came up without a firewall enabled, leaving vulnerable system services unprotected). However, it should surprise few people that exploits have evolved in lock step, and these can find and leverage weaknesses astonishingly fast.

The world is full of ex-system administrators who honestly believed that simply having a patched, up-to-date system was an adequate security model. Hardening servers to be resilient when exposed to the open Internet is a discipline that is  time-consuming and complex. We create DMZs at our security perimeter precisely so we can concentrate our time and resources on making sure our front-line systems are able to withstand continuous and evolving attacks. Maintaining a low risk profile for these machines demands significant concentrated effort and continual ongoing monitoring.

The point so many customers miss is that cloud is the new DMZ. Every publicly accessible server must address security with the same rigor and diligence of a DMZ-based system. But ironically, the basic allure of the cloud—that it removes barriers to deployment and scales rapidly on demand—actually conspires to work against the detail-oriented process that good security demands. It is this dichotomy that is the opportunity for system crackers. Uneven security is the irresistible low-hanging fruit for the cloud hacker.

CloudProtect is a new product from Layer 7 Technologies that helps reconcile the twin conflicts of openness and security in the cloud. CloudProtect is a secure, cloud-based virtual appliance based on Red Hat Enterprise Linux (RHEL). Customers use this image as a secure baseline to deploy their own applications. CloudProtect features the hardened OS image that Layer 7 uses in its appliances. It boots in a safe and resilient mode from first use. This RHEL distribution includes a fully functioning SecureSpan Gateway that governs all calls to an application's APIs hosted on the secured OS. CloudProtect offers a secure console for visual policy authoring and management, allowing application developers, security administrators, and operators to completely customize the API security model to their requirements. For example, need to add certificate-based authentication to your APIs? Simply drag and drop a single assertion into the policy and you are done. CloudProtect also offers the rich auditing features of the SecureSpan engine, which can be the input to a billing process or be leveraged in a forensic investigation.

More information about the full range of Layer 7 cloud solutions, including Single Sign-On (SSO) using SAML for SaaS applications such as Salesforce.com and Google Apps, can be found here on the Layer 7 cloud solutions page.


Jay Heiser asked Will your successors throw away your policy? in a 1/24/2011 post to his Gartner blog:

I spend a lot of my time doing policy reviews. Sometimes the review request comes from the policy author, looking for some feedback. Usually, the request comes from someone else.

One of the first things that many new infosec managers do is start on a policy rewrite. While this is sometimes a political gesture, meant to establish the authority of a new manager, it is more often done because the existing policy is either obsolete or poorly written.

Bad policies are counterproductive in multiple ways.  It is usually impractical to follow a poorly written policy, which sends the message to the organization that policies are merely a bureaucratic exercise that can be ignored. In some cases, policies are based on a flawed analysis of risk, requiring employees to unnecessarily restrict their activities in ways that are bad for business. This reduces efficiency, and results in a cynical attitude towards the entire security program.

Policy is often a necessary evil, putting a virtual stake in the ground of employee behavior. ‘Good’ policy doesn’t guarantee that you will meet your security goals, not by any means. However, ‘bad’ policy will almost certainly lead to a disappointing security (or any other) program.

Make you[r] policy documents something that your successors will want to keep.


Justin Pirie asked “Why wouldn't you want standards on data portability?” as a deck for his Cloud computing standards: The great debate post of 1/17/2011 to ComputerWorldUK (missed when published):

As more organisations turn to the cloud, the need for an effective set of industry standards is becoming ever more pressing.

There is a clear divide between those who argue for implementation of cloud standards and those who argue against. At the heart of this debate is a clear need to balance the benefits of having a standard with the call for a sustained pace of innovation.

The argument against cloud computing standards relies on the premise that standards just aren’t necessary. In this sense, industry-wide uniformity and standardisation are seen as something that would stifle innovation and distract focus from more specific problems. According to this train of thought, different providers need to be free to evolve solutions that best fit distinctive domain and customer needs.

The alternative ‘one voice, one system’ argument sees the lack of standards in the cloud industry as a serious problem. With the industry devoid of any commonly accepted standards, vendors have nothing to hold them to account, and as a result potential and existing customers have little objective information on which to base their buying decisions. A lack of homogeneity can cause a range of issues.

For instance a deficiency of inter-cloud standards means that if people want to move data around, or use multiple clouds, the lack of fluency between vendors creates a communication barrier which is near impossible to overcome. Surely companies should be able to move their data to whichever cloud provider they want to work with without being tied in for the foreseeable future?

Another issue is that there is considerable confusion around the term ‘cloud’ itself. Among vendors there is a definite trend of ‘cloud washing’ whereby less scrupulous companies re-label their products as cloud computing too readily, overselling the benefits and obscuring the technical deficiencies of what they have to offer.

This “cloud washing” is in some areas leading to a mistrust of cloud. Furthermore, with the market becoming increasingly crowded and no clear standards in place it is hard for customers to tell the difference between a cloud vendor with a properly architected delivery infrastructure and one that has patched it together and is merely using cloud as a ‘badge’.

All of this makes it increasingly difficult for customers to navigate their way through the maze of cloud services on offer and, of course, it is the customer who should be the priority throughout these discussions.

Moving forwards, there are a range of bodies that are pursuing some form of resolution to the standardisation debate. However, for these organisations to have a genuine impact on the industry, companies and individuals need to rally behind them and actively support their calls for universal standards.

The first standard that needs to be tackled is security. It’s the number one customer concern about transferring data to the cloud and needs to be addressed as soon as possible. The reality is that this concern is mirrored by vendors, who are similarly wary of any potential security breaches and, as a result, in most cases go to extreme lengths to protect their customers’ data.

In fact one cloud security firm recently estimated that cloud vendors spend 20 times more on security than an end user organisation would. Security breaches would inevitably mean the reputation of a company falling into disrepute and in worst cases mark the end of their business altogether.

Moreover, the creators of any new cloud based technology do not want to see their project fail for obvious reasons. It is those vendors that do not apply strict standards to their business that need to be called into question. An industry standard is the only way to manage this and good vendors would welcome one because they have nothing to fear from rules of best practice.

The second standard that needs to be tackled is the “Cloud Data Lifecycle”. In previous years, when a customer bought software they installed it directly on their premises. Therefore if the vendor went away they could keep running the software until they found an alternative. With an increasing number of people flocking to the cloud, how can a customer ensure they continue to have access to their data if the vendor goes out of business? It is for this reason that we need Data Lifecycle standards because currently the onus is on the customer to check the financial health of their provider.

The good news for cloud users is that there is light at the end of the tunnel. The issue of standards is no longer being sidelined but instead being addressed on a large number of platforms with contributions from some of the industry’s top decision-makers and influencers. 

For most, if not all conversations, it is simply a question of when, not if, cloud standards are established. However while the debate continues, customers will need to ensure that they are aware of the dangers and pitfalls associated, albeit rarely, with adopting a cloud service. Carrying out their own due diligence and research to ensure that their chosen technology is robust, properly architected and secure will remain an essential practice until that time.

Justin is Director of Communities and Content for Mimecast.


<Return to section navigation list> 

Cloud Computing Events


Matt Milner notified Pluralsight Newsletter Subscribers on 1/24/2011 that Getting Started with Windows Azure [Is] Now Playing!:

This week we are excited to announce our new Windows Azure course, Introduction to Windows Azure. For the next few days we're giving all newsletter subscribers FREE access to the first module - Getting Started with Windows Azure. This course introduces you to the tools and techniques you need to build cloud applications. The modules focus on hands-on examples for using the development tools in Visual Studio and showing you how to accomplish common tasks in Windows Azure development.


Sapna announced in a thread in the Windows IT Pro Forums a Windows Azure AppFabric: Building, Managing, and Connecting High-Density, Multi-Tenant Cloud Applications [Presentation] by Clemens Vasters at the Great Indian Developer Summit at IISc in Bangalore in April 2011:

Windows Azure AppFabric is Microsoft's next-generation middleware application platform in the cloud, providing access control with federated identity, high-density, multi-tenant component-hosting, caching services, on-premise connectivity, rich publish/subscribe messaging, and integration services.

Clemens Vasters, an Architect on the AppFabric product team at Microsoft, is coming to India this April at the Great Indian Developer Summit. He will provide an overview of the AppFabric services that are already commercially available and the new services that Microsoft will bring to market by the end of this year. The summit will take place from 19-22 April, 2011 at IISc, Bangalore.

Clemens Vasters is in a Technical Lead/Architect role on Windows Azure AppFabric at Microsoft Corporation. Clemens has been with Microsoft for 5 years and has worked on WCF, Modeling and Cloud Platform technologies. Before joining Microsoft, he toured the world teaching, implementing, and speaking about distributed systems and Web services. Clemens has written several books, contributed to numerous open source projects, and was the original author of dasBlog, one of the first open source blogging engines on ASP.NET. Follow Clemens on Twitter at @clemensv.

For further information on GIDS 2011, please visit the summit on the web http://www.developersummit.com/


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) posted Introducing the Amazon Simple Email Service on 1/24/2011:


Like most technical endeavors, sending email is a lot harder than it looks! The simple solutions that are entirely adequate when you have to send a couple of dozen daily emails simply don't work when you need to send out hundreds, thousands, or even millions of emails over the same time period.

To pick just one issue, let's talk about something that the email pros call deliverability. This is a measure of how well you are doing at actually getting email through to the intended recipients. It is affected by a number of factors and is absolutely crucial to the successful implementation of your email strategy. In order to maximize deliverability, you need to work with multiple Internet Service Providers (ISPs) to ensure that they trust your content, and you need to monitor and control the number of complaints and bounces that you generate. After spending time and money building out an internal hardware platform, negotiating expensive agreements with third-party vendors, learning the ins and outs of SMTP, and becoming familiar with (not to mention gaining control of) all of the factors that affect deliverability, you may find yourself wondering why you wanted to send email in the first place. Amazon's Chris Wheeler (Technical Program Manager for Amazon SES and an industry-recognized deliverability expert) told me, "You must get your house in order before sending your first piece of mail. Otherwise, your mail will end up in the spam folder, may not be delivered at all, or you may be blocked from sending any more mail."

The new Amazon Simple Email Service (SES) will make it easy for you to send email with minimal setup and maximum scalability. Amazon SES is based on the scalable technology used by Amazon sites around the world to send billions of messages a year.

You'll be able to send email without having to worry about the undifferentiated heavy lifting of infrastructure management, configuring your hosts for optimal sending, and the like. Amazon SES also provides you with access to a number of metrics that will provide you with the feedback needed to tune your email strategy to maximize deliverability.

When you first register, you'll have access to the SES "sandbox" where you can send email only to addresses that you have verified. The verification process sends a confirmation email to the address to be verified; the recipient must click on a link embedded in the email in order to verify the address. You must also verify the email address (or addresses) that will be used to send messages. At this point, with verified addresses in hand, you can send up to 200 messages per day, at a maximum rate of 1 message per second. These limits will allow you to develop and test your application. This address verification process is intended to allow you to develop and debug your application in a controlled environment. It will also help to maintain your reputation as a sender of high quality email.

Once your application is up and running, the next step is to request production access using the SES Production Access Request Form. We'll review your request and generally contact you within 24 hours. Once granted production access, you will no longer have to verify the destination addresses and you'll be able to send email to any address. SES will begin to increase your daily sending quota and your maximum send rate based on a number of factors including the amount of email that you send, the number of rejections and bounces that occur, and the number of complaints that it generates. This will occur gradually over time as your activities provide evidence that you are using SES in a responsible manner. The combination of sandbox access, production access, and the gradual increase in quotas will allow us to help ensure high deliverability for all customers of SES. This is exactly the same process that bulk senders of email, including Amazon.com, use to “season” their sending channels.

Newly verified production accounts can send up to 1,000 emails every 24 hours. The SES team has told me that, while it might vary over time, with responsible use this quota currently can grow to 10,000 daily messages within 3 days and up to 1,000,000 daily messages within a couple of weeks. Similarly, the maximum send rate will start out at 1 email per second, and can rise to 10 per second within 3 days, and all the way to 90 per second within a couple of weeks. These increases are based on usage, and occur automatically as you start to approach your existing limits. You can contact us if you need to send more than 1,000,000 emails per day or at a rate in excess of 90 per second and we'll do our best to accommodate you.

The Simple Email Service will provide you with performance data on your email so that you can track your status and adjust your email sending model if necessary. SES will also provide you with valuable feedback from ISPs in the form of complaints from email recipients.

You can use SES by calling the SES APIs or from the command line. You can also configure your current Mail Transfer Agent to route your email through SES using the directions contained in the SES Developer Guide.

The SES APIs are pretty simple:

  • You use VerifyEmailAddress, ListVerifiedEmailAddresses, and DeleteVerifiedEmailAddress to manage the list of verified email addresses associated with your account.
  • You use SendEmail to send properly formatted emails (supplying From, To, Subject and a message body) and SendRawEmail to manually compose and send more sophisticated emails which include additional headers or MIME data.
  • You use GetSendQuota and GetSendStatistics to retrieve your sending quotas and your statistics (delivery attempts, rejects, bounces, and complaints).

You can read more about the APIs in the SES API Reference.
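
To make the sandbox workflow concrete, here is a rough sketch of those calls using the open source boto library (whose author, Mitch Garnaat, is quoted later in this post). The credentials and addresses are placeholders, and the method names follow boto's SES wrapper rather than the raw Query API.

```python
import boto

# Placeholder credentials; substitute your own AWS access keys.
ses = boto.connect_ses(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY")

# In the sandbox, both the sender and every recipient must be verified;
# SES mails each address a confirmation link that must be clicked.
ses.verify_email_address("sender@example.com")
ses.verify_email_address("recipient@example.com")
print(ses.list_verified_email_addresses())

# Once the addresses are confirmed, SendEmail works.
ses.send_email(
    source="sender@example.com",
    subject="Hello from Amazon SES",
    body="This message was sent through the Simple Email Service.",
    to_addresses=["recipient@example.com"])

# Check remaining quota and recent delivery statistics.
print(ses.get_send_quota())
print(ses.get_send_statistics())
```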

I'm confident that you will find SES to be easy to use and that it will save you a lot of time and a lot of headaches as you build and deploy your applications. As always, please feel free to leave me a comment with your thoughts (and plans) for SES.

By the way, if you are planning to use SES, you might also want to investigate Litmus. This AWS-powered email preview tool will let you see how your email will look when it is rendered by 34 separate email clients and devices, including four versions of Lotus Notes and five versions of Microsoft Outlook. Litmus runs on Amazon EC2 and stores the rendered images on Amazon S3 (read the Litmus case study for more information). …

PS - There's a secret message embedded within the SES "hero graphic" on the AWS home page! Can you figure out what it says? If you are having trouble reading it, use your browser's zoom function (press the Control and + keys on Firefox). If that isn't good enough for you, here's a slightly larger version of the graphic.

Hopefully, the Windows Azure team is working on a similar feature.


David Linthicum posited “Salesforce.com, Google App Engine, Microsoft Azure, and the rest face a tough threat from Amazon.com's 'give the development platform away' strategy” as a deck for his Elastic Beanstalk solves the real issue of cloud platforms: Resource scaling post of 1/25/2011 to InfoWorld’s Cloud Computing blog:

You had to know that Amazon.com would get into the platform cloud service business at some point, and last week marked that very occasion. Amazon Web Services (AWS) Elastic Beanstalk promises to simplify the creation, deployment, and operations of Web applications as they scale, with Amazon.com provisioning and configuring the necessary AWS resources, such as EC2 instances, Elastic Load Balancer, and Auto Scaling Group.

You can think of AWS Elastic Beanstalk as Apache Tomcat on demand for supporting Java development in the cloud. Amazon.com is offering loss-leader pricing in which you pay only if you use AWS infrastructure services. It's almost as if Amazon.com were selling Ginzu knives, not platform services, and Elastic Beanstalk is the free pair of scissors in the deal.

Who gets sand kicked in the face? Salesforce.com's Force.com cloud platform comes to mind. Salesforce.com can read the tea leaves and is gearing up for a fight. Salesforce.com recently paid $212 million for Heroku, a cloud platform for Ruby developers that should be a much better solution for developers than the proprietary Apex language that Salesforce.com now offers. Of course Google App Engine is the most direct competition to Beanstalk, as it too supports Java. App Engine supports Python as well. Plus, don't forget about Microsoft Azure for .Net developers and the smaller cloud platform providers such as Engine Yard, which provides Ruby development in the cloud.

With all that competition, why do I believe Amazon.com will drive Elastic Beanstalk for the win? The core concern for platform providers is having auto-expandable storage and compute resources for cloud applications. The development platform itself is less of an issue. AWS has already solved the resource-scaling problem. Enterprises love free, and most companies will appreciate the idea of cloud providers giving away the development services at no cost, charging for only the resources used, and presenting a big bill only when the application goes into production.

The only way a Java application runs free under AWS Elastic Beanstalk is if it falls within the limits of AWS’s Free Usage Tier.


Lydia Leong chimes in again about Amazon Web Services in her Amazon’s Elastic Beanstalk post of 1/24/2011:

Amazon recently released a new offering called the Elastic Beanstalk. At its heart, it is a simplified interface to EC2 and its ancillary services (load-balancing, auto-scaling, and monitoring integrated with alerts), along with an Amazon-maintained AMI containing Linux and Apache Tomcat (an open source Java EE application server), and a deployment mechanism for a Java app (in the form of a WAR file), which notably adds tighter integration with Eclipse, a popular IDE.

Many people are calling this Amazon’s PaaS foray. I am inclined to disagree that it is PaaS (although Amazon does have other offerings which are PaaS, such as SimpleDB and SQS). Rather, I think this is still IaaS, but with a friendlier approach to deployment and management. It is developer-friendly, although it should be noted that in its current release, there is no simplification of any form of storage persistence — no easy configuration of EBS or friendly auto-adding of RDS instances, for example. Going to the database tab in the Elastic Beanstalk portion of Amazon’s management console just directs you to documentation about storage options on AWS. Almost no one is going to be running a real app without a persistence mechanism, so the Beanstalk won’t be truly turnkey until this is simplified accordingly.

Because Elastic Beanstalk fully exposes the underlying AWS resources and lets you do whatever you want with them, the currently-missing feature capabilities aren’t a limitation; you can simply use AWS in the normal way, while still getting the slimmed-down elegance of the Beanstalk’s management interfaces. Also notably, it’s free — you’re paying only for the underlying AWS resources.

Amazon exemplifies the idea of IT services industrialization, but in order to address the widest possible range of use cases, Amazon needs to be able to simplify and automate infrastructure management that would otherwise require manual work (i.e., either the customer needs to do it himself, or he needs managed services). I view Elastic Beanstalk and its underlying technologies as an important advancement along Amazon’s path towards automated management of infrastructure. In its current incarnation, it eases developer on-boarding — but in future iterations, it could become a key building-block in Amazon’s ability to serve the more traditional IT buyer.

I agree with Lydia on the point she raises in this post, but not on her placement of AWS in her recently issued Gartner Magic Quadrant for IaaS and Web Hosters.


William Vambenepe (@vambenepe) continued his analysis of REST versus RPC APIs in a Cloud APIs are like military parades post of 1/24/2011:

The previous post (“Amazon proves that REST doesn’t matter for Cloud APIs”) attracted some interesting comments on the blog itself, on Hacker News, and in a response post by Mike Pearce (where I assume the photo is supposed to represent me being an AWS fanboy). I failed to promptly follow up on it and address the response; then the holidays came. But Mark Little was kind enough to pick the entry up for discussion on InfoQ yesterday, which brought new readers and motivated me to write a follow-up [See below].

Mark did a very good job at summarizing my point and he understood that I wasn’t talking about the value (or lack of value) of REST in general. Just about whether it is useful and important in the very narrow field of Cloud APIs. In that context at least, what seems to matter most is simplicity. And REST is not intrinsically simpler.

It isn’t a controversial statement in most places that RPC is easier than REST for developers performing simple tasks. But on the blogosphere I guess it needs to be argued.

Method calls are how normal developers write normal code. Doing it over the wire is the smallest change needed to invoke a remote API. The complexity with RPC has never been conceptual; it’s been in the plumbing. How do I serialize my method call and send it over? CORBA, RMI and SOAP tried to address that; none of them fully succeeded in keeping it simple and yet generic enough for the Internet. XML-RPC somehow (and unfortunately) got passed over in the process.

So what did AWS do? They pretty much solved that problem by using parameters in the URL as a dead-simple way to pass function parameters. And you get the response as an XML doc. In effect, it’s one-half of XML-RPC. Amazon did not invent this pattern. And the mechanism has some shortcomings. But it’s a pragmatic approach. You get the conceptual simplicity of RPC, without the need to agree on an RPC framework that tries to address way more than what you need. Good deal.
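
To illustrate what "one-half of XML-RPC" looks like in practice, here is a small Python sketch that builds an AWS-style Query request and reads a response. The signature parameters required by the real EC2 Query API are omitted, and the version string and sample response are illustrative; client libraries such as boto take care of those details.

```python
import urllib.parse
import xml.etree.ElementTree as ET

ENDPOINT = "https://ec2.amazonaws.com/"

def build_query_url(action, params=None):
    """Build an RPC-style request: the method name and its arguments
    travel as ordinary query-string parameters."""
    query = {"Action": action, "Version": "2010-11-15"}  # version string is illustrative
    query.update(params or {})
    return ENDPOINT + "?" + urllib.parse.urlencode(sorted(query.items()))

# 'CreateKeyPair' over the wire is just a method call spelled out in the URL.
print(build_query_url("CreateKeyPair", {"KeyName": "my-key"}))
print(build_query_url("DescribeInstances", {"InstanceId.1": "i-12345678"}))

# The answer comes back as an XML document that maps naturally onto a return value.
sample_response = "<CreateKeyPairResponse><keyName>my-key</keyName></CreateKeyPairResponse>"
print(ET.fromstring(sample_response).findtext("keyName"))
```

Real responses carry an XML namespace and additional fields, but the shape of the exchange is the same: a method name, named parameters, and an XML document as the return value.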

So, when Mike asks, “Does the fact that AWS use their own implementation of an API instead of a standard like, oh, I don’t know, REST, frustrate developers who really don’t want to have to learn another method of communicating with AWS?” and goes on to answer “Yes”, I scratch my head. I’ve met many developers struggling to understand REST. I’ve never met a developer intimidated by RPC. As to the claim that REST is a “standard”, I’d like to read the spec. Please don’t point me to a PhD dissertation.

That being said, I am very aware that simplicity can come back to bite you, when it’s not just simple but simplistic and the task at hand demands more. Andrew Wahbe hit the nail on the head in a comment on my original post:

Exposing an API for a unique service offered by a single vendor is not going to get much benefit from being RESTful.

Revisit the issue when you are trying to get a single client to work across a wide range of cloud APIs offered by different vendors; I’m willing to bet that REST would help a lot there. If this never happens — the industry decides that a custom client for each Cloud API is sufficient (e.g. not enough offerings on the market, or whatever), then REST may never be needed.

Andrew has the right perspective. The usage patterns for Cloud APIs may evolve to the point where the benefits of following the rules of REST become compelling. I just don’t think we’re there and frankly I am not holding my breath. There are lots of functional improvements needed in Cloud services before the burning issue becomes one of orchestrating between Cloud providers. And while a shared RESTful API would be the easiest to orchestrate, a shared RPC API will still be very reasonably manageable. The issue will mostly be one of shared semantics more than protocol.

Mike’s second retort was that it was illogical for me to say that software developers are mostly isolated from REST because they use Cloud libraries. Aren’t these libraries written by developers? What about these, he asks. Well, one of them, Boto’s Mitch Garnaat, left a comment:

Good post. The vast majority of AWS (or any cloud provider’s) users never see the API. They interact through language libraries or via web-based client apps. So, the only people who really care are RESTafarians, and library developers (like me).

Perhaps it’s possible to have an API that’s so bad it prevents people from using it but the AWS Query API is no where near that bad. It’s fairly consistent and pretty easy to code to. It’s just not REST.

Yup. If REST is the goal, then this API doesn’t reach it. If usefulness is the goal, then it does just fine.

Mike’s third retort was to take issue with that statement I made:

The Rackspace people are technically right when they point out the benefits of their API compared to Amazon’s. But it’s a rounding error compared to the innovation, pragmatism and frequency of iteration that distinguishes the services provided by Amazon. It’s the content that matters.

Mike thinks that

If Rackspace are ‘technically’ right, then they’re right. There’s no gray area. Morally, they’re also right and mentally, physically and spiritually, they’re right.

Sure. They’re technically, mentally, physically and spiritually right. They may even be legally, ethically, metaphysically and scientifically right. Amazon is only practically right.

This is not a ding on Rackspace. They’ll have to compete with Amazon on service (and price), not on API, as they well know and as they are doing. But they are racing against a fast horse.

More generally, the debate about how much the technical merits of an API matters (beyond the point where it gets the job done) is a recurring one. I am talking as a recovering over-engineer.

In a post almost a year ago, James Watters declared that it matters. Mitch Garnaat weighed in on the other side: given how few people use the raw API we probably spend too much time worrying about details, maybe we worry too much about aesthetics, I still wonder whether we obsess over the details of the APIs a bit too much (in case you can’t tell, I’m a big fan of Mitch).

Speaking of people I admire, Shlomo Swidler (in general, only library developers use the raw HTTP. Everyone else uses a library) and Joe Arnold (library integration (fog / jclouds / libcloud) is more important for new #IaaS providers than an API) make the right point. Rather than spending hours obsessing about the finer points of your API, spend the time writing love letters to Mitch and Adrian so they support you in their libraries (also, allocate less of your design time to RESTfulness and more to the less glamorous subject of error handling).

OK, I’ll pile on two more expert testimonies: RightScale’s Thorsten von Eicken (the API itself is more a programming exercise than a fundamental issue, it’s the semantics of the resources behind the API that really matter) and F5’s Lori MacVittie (the World Doesn’t Care About APIs).

Bottom line, I see APIs a bit like military parades. Soldiers know better than to walk in tight formation, wearing bright colors and to the sound of fanfare into the battlefield. So why are parade exercises so prevalent in all armies? My guess is that they are used to impress potential enemies, reassure citizens and reflect on the strength of the country’s leaders. But military parades are also a way to ensure internal discipline. You may not need to use parade moves on the battlefield, but the fact that the unit is disciplined enough to perform them means they are also disciplined enough for the tasks that matter. Let’s focus on that angle for Cloud APIs. If your RPC API is consistent enough that its underlying model could be used as the basis for a REST API, you’re probably fine. You don’t need the drum rolls, stiff steps and the silly hats. And no need to salute either.

Related posts:

  1. Amazon proves that REST doesn’t matter for Cloud APIs
  2. Separating model from protocol in Cloud APIs
  3. REST in practice for IT and Cloud management (part 1: Cloud APIs)
  4. Waiting for events (in Cloud APIs)
  5. Dear Cloud API, your fault line is showing
  6. Toolkits to wrap and bridge Cloud management protocols

I’d be willing to bet that fewer than 1% of Azure developers abandon the StorageClient library and write their own REST API calls to Azure Tables. (Mike Amundsen (@mamund) comes to mind as an Azure RESTafarian.)


Mark Little leaves open Is REST important for Cloud? in a 1/23/2011 post to InfoQ:

Over the years we have heard a lot about the benefits of REST for Web-based developments, particularly in the context of Web Services and more recently in terms of its relevancy to SOA. Therefore, with the increase in use of Cloud, even if only at the early adopter stages, it was no surprise to see REST being adopted by various implementations.

Back in 2009, William Vambenepe considered the role of REST in Cloud and concluded that at that point in time Sun and Rackspace had APIs that were more RESTful than others. Well, not quite two years later, with more development experience, more users and more choice, William looks at probably the most successful Cloud provider to date, Amazon, and asks the question (we paraphrase) "If Amazon doesn't use REST, is it really necessary for successful Cloud?" As he says:

Every time a new Cloud API is announced, its “RESTfulness” is heralded as if it was a MUST HAVE feature. And yet, the most successful of all Cloud APIs, the AWS API set, is not RESTful.

Now you may disagree with William's assessment that we are far enough into using Cloud to assume this is not a coincidence, but it is an interesting situation to consider. Furthermore, William isn't suggesting that REST is not important, only that at least as far as Cloud management is concerned it simply is not important and does not offer any appreciable benefits over, say, RPC.

AWS mostly uses RPC over HTTP. You send HTTP GET requests, with instructions like ?Action=CreateKeyPair added in the URL. Or DeleteKeyPair. Same for any other resource (volume, snapshot, security group…). Amazon doesn’t pretend it’s RESTful, they just call it “Query API” (except for the DevPay API, where they call it “REST-Query” for unclear reasons).

As he points out, the lack of RESTful APIs has not stopped many people using it, nor has the scalability of deployed systems been limited or affected adversely. Neither has it impacted security or restricted the types of applications and languages that can make use of it.

Here’s a rule of thumb. If most invocations of your API come via libraries for object-oriented languages that more or less map each HTTP request to a method call, it probably doesn’t matter very much how RESTful your API is.

William points out that although an earlier article from the engineers at Rackspace comparing and contrasting the RESTfulness of their API with AWS is accurate, it doesn't seem to make a difference where it counts: with developers and users. In conclusion, he suggests that ultimately the RESTfulness of Cloud (at least Cloud management) does not matter as much as the simplicity.

The AWS API being an example of the latter without the former. As I wrote in my review of the Sun Cloud API, “it’s not REST that matters, it’s the rest”. One and a half years later, I think the case is closed.

His article has attracted many comments, most of which disagree. For instance, one commenter states:

What you say isn’t false to my mind, although the success of EC2 has very little to do with their API, as John (comment #3) points out too. More generally though I’m not sure I like the sound of all of this. Would the Web be here today if Tim Berners-Lee had designed an interface – specific to physicists – allowing them to share results once they had installed some specific “collaboration software for physicists” on their machines ?

Another adds:

Your questions are great but they totally miss the mark: these details (like RESTful APIs) primarily affect library developers, and good libraries can abstract away any kind of API into something more resource-oriented. That requires someone able to map concepts effectively and consistently. Once good libraries are available, there’s not a great barrier for adoption and excessive misuse and waste are completely coincidental. It always comes down to consistency: as long as you’re able to provide a consistent interface, even if it’s not consistent with previously established concepts and mappings, an API will likely succeed given the product is worth the (extra) effort.

With more and more Cloud implementations being developed by vendors and open source efforts, their RESTfulness is usually mentioned as an important feature. But William's question remains: if the most successful Cloud vendor to date does not use REST, does it really matter?


Joab Jackson reported Forrester: Oracle Confining Java Future to Enterprise Use in a 1/24/2011 article for PCWorld magazine:

Oracle is limiting the development of Java by focusing the language's future on enterprise use, to the detriment of a wider, more diverse Java community, charged a pair of analysts at Forrester Research.

"Sun had a very broad focus for Java, including enterprise middleware but also PCs, mobile devices, and embedded systems. Oracle's focus will be on enterprise middleware first and foremost, because that's where the money is," concluded the report, authored by Forrester analysts Jeffrey Hammond and John Rymer.

As a result, Java may lose some of its prominence among the general worldwide development community as it becomes regarded more as a specialized server-side language for Oracle and IBM enterprise customers, the duo warn.

Since Oracle announced the purchase of Sun Microsystems, completed a year ago, Oracle CEO Larry Ellison has frequently praised Sun's Java programming language as one of the most valuable assets that came with the acquisition.

But that high regard may not extend to Java as a general-purpose programming language. Certainly some of Oracle's movements since the purchase point to a more restricted use.

Although most of the Java specification is open source, Oracle maintains tight control over open-source variants through its ownership of the Java trademark, the analysts contend. It also maintains a strong hand over the JCP (Java Community Process), the independent body overseeing Java development.

In December, the Apache Software Foundation withdrew its participation in the JCP in protest of some of Oracle's licensing decisions surrounding Java. Oracle subsequently asked ASF to reconsider its departure, though to no avail.

"Losing The Apache Software Foundation as a supporter ... hurts Oracle's credibility as a partner with the Java alpha geeks who drive so much independent and discontinuous Java innovation," the analysts wrote in a blog post announcing the report.

In lieu of ASF support, Oracle seems to be courting IBM instead, throwing its weight behind the IBM-backed OpenJDK open-source Java implementation. In a related report also just released, Rymer praised IBM's WebSphere 7 as the most robust heavyweight Java application server.

Another factor the analysts pointed out is that Oracle is also not addressing one of Java's current weaknesses, namely its complexity. This complexity may be driving developers to more readily consider other alternatives for internal or cloud use, such as Microsoft's .NET platform or Ruby on Rails. This complexity is also spurring the development of external frameworks, such as Spring, which further diverts outside developer attention away from core Java work.

To formulate the report, the Forrester analysts interviewed 12 organizations directly involved with Java, including Oracle, IBM, Red Hat, Microsoft and the ASF. They also surveyed the thoughts of Java users through the comment section of Forrester's blog site, and in person at events such as JavaOne.

Oracle declined to comment on the report.


Alex Popescu (@al3xandru) posted Hive and HBase in Toad for Cloud Demo on 1/24/2011 to his my NoSQL blog:

image Jeremiah Peschka put together two short videos demoing Toad for Cloud Eclipse plugin with Hive and HBase. Those complaining about lack of SQL in NoSQL databases should check it out. On a different note though, I did express a few concerns about such a tool related to the complexity and performance of building the indirection layer and supporting operations that are not native to target system. I’d add to these the fact that some NoSQL databases are continuously adding features that can radically change the way this tool performs (e.g. Cassandra 0.7 will feature secondary indexes).

Looking at how slowly the tool performs, and at the fact that it doesn't have any sort of results pagination, seems to confirm some of the concerns expressed above. On the other hand, it is kind of difficult to understand a tool just by watching a video.

I’m still waiting for the Windows Azure Storage team to deliver the promised secondary indexes for Azure Tables.


<Return to section navigation list> 

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, AWS Elastic Beanstalk, Amazon Simple Email Services, Amazon SES, REST, RPC, NoSQL, Cassandra, Hive, HBase
