Wednesday, May 05, 2010

Windows Azure and Cloud Computing Posts for 5/5/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in May 2010. 

Off Topic: My Google Reports Real-Time Stats Experiments “No Longer Available” post of 5/5/2010 reports that links in the Google Real-Time Blogs and Stats For Windows Azure and SQL Azure section of the left frame point to pages indicating that Google is about to shut down these features. Links will be removed when they no longer return meaningful results.

Azure Blob, Table and Queue Services

Eugenio Pace’s Windows Azure Guidance – Failure recovery – Part III (Small tweak, great benefits) post of 5/5/2010 describes a big improvement to the code in his Windows Azure Guidance – Failure recovery and data consistency – Part II post of 5/4/2010:

In the previous post, my question was about a small change in the code that would yield a big improvement. The answer is:

[Image: the revised code from Eugenio’s post]

What changed?

  1. No try / catch
  2. We reversed the order of writes: first we write the details, then we write the “header” or “master” record for the expense.

If the last SaveChanges fails, then there will be orphaned records and images, but the user will not see anything (except for the exception), and presumably would re-enter the expense report. In the meantime a background process will eventually clean up everything. Simple and efficient.
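
For readers who want to see the pattern in code form, here’s a minimal C# sketch of the reversed-write approach. The ExpenseDataContext, Expense, ExpenseItemRow and ExpenseRow types and the table names are hypothetical stand-ins, not the actual types from the Windows Azure Guidance sample:

// A minimal sketch of the "details first, header last" pattern. The types and
// table names below are hypothetical, not the sample's actual code.
public void SaveExpense(ExpenseDataContext context, Expense expense)
{
    // Write the detail rows (and any images) first. If any of these writes
    // fails, no "master" row exists yet, so the partial data stays invisible
    // to the application.
    foreach (var item in expense.Items)
    {
        context.AddObject("ExpenseItem", new ExpenseItemRow(expense.Id, item));
    }
    context.SaveChanges();

    // Write the "header"/"master" record last. Only when this SaveChanges
    // succeeds does the expense become visible; orphaned detail rows from an
    // earlier failure are swept up later by a background worker.
    context.AddObject("Expense", new ExpenseRow(expense));
    context.SaveChanges();
}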

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

David Robinson explains why not to try SELECT INTO With SQL Azure in this 5/4/2010 post to the SQL Azure team blog:

SQL Azure requires that all tables have clustered indexes; the SELECT INTO statement, which creates a table on the fly, does not support the creation of clustered indexes and therefore isn’t allowed. If you plan to migrate to SQL Azure, you need to modify your code to create the table explicitly instead of using the SELECT INTO statement. This goes for both temporary tables (where clustered indexes are not required) and permanent tables.

A quick script to create some sample data to use:

CREATE TABLE Source (Id int NOT NULL IDENTITY, [Name] nvarchar(max), [Color] nvarchar(10),  
CONSTRAINT [PK_Source] PRIMARY KEY CLUSTERED 
(
      [Id] ASC
))
      
INSERT INTO Source([Name], [Color]) VALUES ('Shirt','Red')
INSERT INTO Source([Name], [Color]) VALUES ('Pants','Red')
INSERT INTO Source([Name], [Color]) VALUES ('Tie','Pink')

Here is some example code that uses the SELECT INTO statement for a temp table:

SELECT *
INTO #Destination
FROM Source
WHERE [Color] LIKE 'Red'
-- Do Something

DROP TABLE #Destination

This is going to fail in SQL Azure with this error:

Msg 40510, Level 15, State 2, Line 1

Statement 'SELECT INTO' is not supported in this version of SQL Server.

To work around this, create your destination table first and then call INSERT INTO … SELECT. Here is an example:

CREATE TABLE #Destination (Id int NOT NULL, [Name] nvarchar(max), [Color] nvarchar(10))

INSERT INTO #Destination(Id, [Name], [Color])
SELECT Id, [Name], [Color]
FROM Source
WHERE [Color] LIKE 'Red';

-- Do Something

DROP TABLE #Destination

Dave is a technical editor for my Cloud Computing with the Windows Azure Platform book.

Marcello Lopez Ruiz explains Query Projections in WCF Data Services in this 5/4/2010 post:

The MSDN documentation on the topic does a pretty good job at giving you a high-level view of what happens when you use select to do a projection of the query results, and calls out a number of things that aren't supported when projecting entity types. It looks like projecting into anonymous types is much more powerful - how come?

There is a very important principle that cost a lot of work to enforce, and that is that we'll try our best to make sure that you don't use projections in a way that loses data when updating. Big, huge thing for us. The following statement from the MSDN page should be read carefully.

When updates are made to a projected type that does not contain all of the properties of the entity in the data model of the data service, existing values not included in the projection on the client will be overwritten with uninitialized default values.

That said, in the following posts I'll cover a few of the things you can do with projected entity types that are quite useful and may not be obvious.
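
To make the distinction concrete, here’s a small hedged C# example of the two projection styles Marcello contrasts. The Customer entity, its CustomerId/Name properties and the service URI are hypothetical, used only for illustration:

using System;
using System.Linq;
using System.Data.Services.Client;

// Hypothetical Customer entity used only to illustrate the two projection styles.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class ProjectionDemo
{
    public static void Run()
    {
        var context = new DataServiceContext(new Uri("http://example.com/Customers.svc"));

        // Projecting into an anonymous type: the result is read-only, so a later
        // update can't wipe out properties that weren't selected.
        var names = from c in context.CreateQuery<Customer>("Customers")
                    select new { c.CustomerId, c.Name };

        // Projecting into the entity type is allowed, but per the MSDN caveat quoted
        // above, updating one of these partially loaded entities sends the
        // unprojected properties back as uninitialized default values.
        var partial = from c in context.CreateQuery<Customer>("Customers")
                      select new Customer { CustomerId = c.CustomerId, Name = c.Name };
    }
}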

T10 Media explains the hard way of Migrating A SQL Server Database To SQL Azure in this 5/4/2010 post to the Azure Support blog:

SQL Azure is essentially a cut-down version of SQL Server, so we would expect that migrating from SQL Server to SQL Azure should be a straightforward task. However, in the first release of SQL Azure, the scripts generated by SQL Server Management Studio will require some extra cleanup since not all SQL Server 2008 features are supported in SQL Azure.

For this demo we will use SQL Server Management Studio (SSMS) to generate the SQL scripts and migrate an existing database from SQL Server 2008 to SQL Azure. It should be noted that there are several tools such as the SQL Migration Wizard for assisting in the migration, but in this article we will look at performing a manual migration.

It’s far simpler and quicker to use George Huey’s SQL Azure Migration Wizard to handle the transfer of structure and data, as described in my Using the SQL Azure Migration Wizard v3.1.3/3.1.4 with the AdventureWorksLT2008R2 Sample Database post of 1/23/2010, which explains how to use a recent update of the SQL Azure MW to generate an AdventureWorks Lite database. This is the simplest approach to creating a database to test “one-click” OData for SQL Azure.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Brent Stineman’s (@BrentCodeMonkey) long-awaited Azure AppFabric – A bridge going anywhere post of 5/5/2010 about port bridging with the AppFabric’s Service Bus begins:

Yeah, I know. I said my next posts were going to be more articles about the Windows Azure Diagnostics. Course I also thought I’d have those up a week after the first. Life has just been too hectic of late I guess. I’d love to admit I’m back of my own free will but I feel I have a debt that needs to be paid.

So I’ve spent the last two months working this Azure POC project. Nearly everything was done except for one important point, connecting a hosted application to an on-premise SQL Server. The best practice recommendations I’d been seeing for the last year on the MSDN forums always said to expose your data tier via a series of web services that your hosted application could then consume. You could secure those services however you saw fit and thus help ensure that your internal database remained secure.

Problem is if your application is already coded for a direct database connection, you’re going to have to rework it to now use services. In my case, I didn’t have the time and I wasn’t very excited about the degree of risk reworking all that code was going to introduce to my project. So I wanted something that I could implement that required only a configuration string change. Alexsey, the MSFT architect that was providing guidance on this project suggested I check out a blog post by Clemens Vasters.

That article was about using the Azure AppFabric to create a port bridge between two processes. Many of you are likely familiar with the concept of port tunneling. Heck, most of us have likely used this technique to bypass firewalls and web proxies for both legitimate and more dubious reasons. This process allows you to route traffic that would normally go through one port to a different port on another machine that in turn reroutes the connection back to the original port we wanted. This differs from an Azure AppFabric port bridge in that like other AppFabric connections, both ends of our bridge make outbound connections to the AppFabric’s relay to establish the connection.

Going out the in door

It’s this outbound connection that is so powerful. Those security guys we all love to hate and slander pay close attention to what they allow to come into their networks. But they’re usually pretty forgiving about outbound connections. They assume if we’re permitted on their network, we should be allowed a bit of leeway to reach out. Even in more hardened networks, it’s easier to get an outbound connection approved than it is to get a new inbound connection opened.

It’s this outbound connection that will be our bridge to what lies behind the firewall. You see, we’re just interested in establishing a connection. Once that’s done, traffic can flow bi-directionally through it. Heck, we can even multiplex the connection so that multiple requests can operate simultaneously on the same connection.

How it works

Clemens has done a great job in his blog post and its sample code of getting much of the work out of our way. You’ve got a packaged service host that will run either as a service or a console application. You’ve also got a client application (and there’s a Windows Azure worker role out there that will be posted soon as well, I hope). Both of these build on some almost magical connection management logic built on top of a netTcpRelayBinding. But here’s Clemens’ picture of it:

[Image: Clemens Vasters’ port bridge architecture diagram]

Nice picture, but it needs some explanation. The bridge agent opens a specified port (in the case of this diagram, 1433 for SQL Server) and monitors for connections to that port. Each connection gets accepted and its contents handed off to the Azure AppFabric. On the other end we have the Port Bridge Service Host. This host registers our AppFabric Service Bus endpoint and also looks for any inbound messages. When these messages are received, it will spin up a connection to the requested resource inside the firewall (at its lowest level this is just an instance of a TcpClient object) and pass the data to it. Return messages come back in just the same manner.

Now this is where the magic starts to kick in. You’d think that with all these moving parts the connection would be pretty slow. And admittedly, there’s a bit of overhead in getting the connection established. But like any TCP-based connection, once the “pipe” has been established, the overhead of moving information through all this isn’t much greater than just routing through multiple firewalls/routers. In fact, you can look at our agents and the bus as just software-based routers. :D

Brent continues with detailed “Back to the problem at hand, my SQL connection” and “Not very deep, but there it is” topics and notes that source code is on his “list for another day.” He also lavishes praise on Microsoft’s Alexsey Savateyev and Wade Wegner for assistance in getting his project working.
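
If you haven’t dug into Clemens’ port bridge sample, the underlying pattern is an ordinary WCF ServiceHost that registers an outbound endpoint on the Service Bus relay with a netTcpRelayBinding. Here’s a stripped-down sketch of just that listener pattern, based on my reading of the April 2010 AppFabric SDK and using a trivial echo contract; the namespace, issuer name and key are placeholders, and the real port bridge layers connection multiplexing and TcpClient forwarding on top of this:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // ships with the Azure AppFabric SDK

// A trivial contract stands in for the port bridge's real byte-pump contract;
// the point here is only the relay listener pattern.
[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // The on-premises side dials *out* to the relay, so no inbound
        // firewall port needs to be opened.
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "your-namespace", "EchoService");

        var host = new ServiceHost(typeof(EchoService));
        var endpoint = host.AddServiceEndpoint(
            typeof(IEchoContract), new NetTcpRelayBinding(), address);

        // Authenticate to the AppFabric relay with the namespace's shared secret
        // ("owner" and the issuer key below are placeholders).
        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "your-issuer-key";
        endpoint.Behaviors.Add(credentials);

        host.Open();
        Console.WriteLine("Listening on {0}; press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}
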

Wade Wegner asks What is the Azure AppFabric? and provides the detailed answer in this 5/5/2010 post:

If you take a look at the official Windows Azure platform website, you’ll see two definitions for the Windows Azure platform AppFabric (hereafter referred to as the Azure AppFabric) prominently called out:

“… connects cloud services and on-premises applications.”

“… helps developers connect applications and services in the cloud or on-premises.”

While the purpose of the Azure AppFabric seems clear to me – enable developers to connect applications and services – there are a couple things that generally cause confusion: execution and branding.  I plan to talk about how to use the Azure AppFabric quite a bit in the future, but in this post I want to address the branding.

The Azure AppFabric has been rebranded numerous times.  This isn’t surprising given that it has largely been a community technology preview, but it has led to some confusion.

So, some brief history …

Note: this is based entirely on my cyber-sleuthing and personal experience.  I’m sure I have gaps and perhaps an inadvertent inaccuracy, so as I get corrected I’ll update.  I didn’t join Microsoft until early 2008, so the early days of the Azure AppFabric precede my Microsoft employment.

In April 2007, the BizTalk Server team announced that the CTP release of BizTalk Services was live.  They had created an Internet Services Bus (ISB) that allowed developers to create “Internet scale composite applications more rapidly.”  Clemens Vasters described this new ISB in a post.  Later, in July 2007, the BizTalk Server team talked about Hosted Workflows in BizTalk – an exciting extension to the ISB announcement.  Over time, Access Control was added into the mix as well.

Soon, word of Project Zurich started hitting the airwaves.  Mary-Jo Foley wrote about “’Zurich,’ Microsoft’s elastic cloud” back in July 2008, describing it as an initiative to “extend Microsoft’s .NET application development technologies to the Internet ‘cloud.’” Close, but not quite right.  My first introduction to Project Zurich came while working on a project with RedPrairie on a supply chain proof-of-concept that ultimately culminated in a Bob Muglia keynote demonstration at PDC 2008 (around 59 minutes).

At the Professional Developers Conference (PDC) 2008 the platform was rebranded .NET Services and included as part of the Azure Services Platform.  You can actually still see some of the .NET Services branding on this BizTalk Service page.  By the fall of 2008, .NET Services had emerged as a mature platform (even though still in CTP) consisting of an Internet Service Bus, an Access Control Service (ACS), and a Workflow Service.  In June 2009, the .NET Services team announced that they were pulling the Workflow Service.  As Windows Workflow Foundation in .NET 4.0 evolved, it was clear that most customers wanted the Workflow Service to follow the .NET 4.0 model (not .NET 3.5), which it did not at the time.  Consequently, the .NET Services team pulled workflow and focused on the ISB and ACS.

At PDC 2009, .NET Services went through its most recent branding change, and was eventually launched in 2010 as the Windows Azure platform AppFabric.  Of course, this is a really long name, so most people just end up saying Windows Azure AppFabric or just Azure AppFabric.

The biggest challenge I see today with the name is that, at PDC 2009, we also rebranded “Dublin” and “Velocity” as the Windows Server AppFabric – almost too much name overloading, although there are some good reasons for it that will emerge over time.  To make things clear, I’ll always say either Azure AppFabric or Server AppFabric.

If you really take a look at how this all has evolved, you can start to see how Microsoft’s cloud platform strategy has evolved over the last several years.

So, where does this leave us?

In my opinion, it leaves us with a technology that is a key differentiator in Microsoft’s cloud platform.  I’m not just saying this – I really believe it, or I wouldn’t be moving my family up to Redmond so that I can focus on it.

In closing, let’s be clear on two things – in Azure AppFabric, there are both a Service Bus and an Access Control Service.

The Service Bus is an Internet-scale enterprise service bus that makes it easy to connect applications over the Internet. Services that register on the Service Bus can easily be discovered and accessed across any network topology.

The Access Control Service helps you build federated authorization into your applications and services, without the complicated programming that is normally required to secure applications that extend beyond organizational boundaries.

Okay, now that I’ve spent a little time covering some history, expect to see a major focus on what you can do today – and lots of code and examples.

The AD FS 2.0 Product Team announced AD FS 2.0 is here! a few hours after Vibro (below) in this 5/5/2010 post to the “Geneva” Team blog:

We are very happy to announce the general availability of AD FS 2.0! It is our pleasure to offer this release for Windows Server 2008 and 2008 R2 that makes it easier to work across companies, leverage the cloud, and develop secure applications all while using industry standard interoperable protocols. We listened to your feedback from the release candidate and have made AD FS 2.0 even easier to manage by simplifying proxy management. Finally, we’ve hammered this build to ensure you’ll see the rock solid reliability and screaming fast performance that you’d expect from Microsoft.

The setup package for AD FS 2.0 can be downloaded here.

The team behind AD FS 2.0 can be seen in several Channel 9 videos discussing the features and capabilities of the release.

Check out the following resources to learn more about AD FS 2.0:

We’d like to give a big thank you to everyone who’s helped us by providing feedback since we had our first Beta. Stay tuned here as we will continue to blog about AD FS 2.0 features over the coming weeks and months. If you have questions, don’t hesitate to hop on the forum and ask.

See how you can use claims to unleash the power of your identity infrastructure by deploying AD FS 2.0 today!

Vittorio Bertocci exclaims ADFS 2.0 Ships! in this 5/5/2010 post:

Administrators around the globe, rejoice: Active Directory Federation Services 2.0 has been released!

On the IdElement [Channel9 site] you can find the customary special on the event:

Active Directory Federation Services v2 Ships!

Today we are announcing the general availability of Active Directory Federation Services 2.0 (ADFS 2.0).

In this lightning-fast video, Stuart Kwan, Group Program Manager for the Identity and Security Division, lists the essential facts about this new, exciting release.
Tune in!

How ADFS v2 Helps Microsoft IT to Manage Application Access

ADFS 2.0 is being released today, but there is a group that has been using it for almost two years: Microsoft's IT department, which dogfooded ADFS 2.0 from the very first pre-release.

Brian Puhl, Principal System Architect, and Femi Aladesulu, Service Engineer, share their vast experience in using ADFS 2.0, which they earned handling access to the Microsoft IT application portfolio on premises and in the cloud.

From the topology of Microsoft's internal ADFS 2.0 deployment to the description of how day-to-day operations (such as a new application's onboarding) are handled, Brian and Femi will take you on a whirlwind tour. Today, Microsoft IT is able to offer identity as a reliable, self-provisioned service. Tune in to get tips that will help you to achieve the same results!

ADFS v2: Tales from the Test Team

How do you test a world-class product like ADFS 2.0? Performance, stress testing, security, and the classic aspects of server testing need to be compounded with all supported topologies and platforms, as well as the 18 languages in which the product is available.

Summarizing in 30 minutes what happened during the many man-years that went into the product is not easy. In this video, ADFS team members Ramiro Calderon and Toland Hon share some of the project's most interesting challenges and discuss how they overcame them while taking ADFS 2.0 from inception to release.

Congratulations to the team for a superb job.

From now on enabling claims-based identity on your environments is as easy as switching on a server role… it will change everything :-)

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Colin Melia’s Getting Started with PHP on Windows Azure post of 5/5/2010 parallels his recent screencast on DNRTV:

This post will get you started with PHP development in Visual Studio for deployment to Windows Azure.

Using the FastCGI capabilities of IIS, you can run PHP applications on IIS and Windows Azure (in your local Development Fabric or in the Windows Azure cloud).

So, here’s how to create a simple PHP application in Visual Studio 2010 on Windows 7.

If you want to see this in video, check out my screencast interview with Dot Net Rocks TV on Azure and go to point 48:40.

  • Download the latest Windows ZIP files from http://windows.php.net/download/ (currently 5.3.2).  You should get the VC9 x86 Non Thread Safe version; the FastCGI system on IIS makes its use thread-safe.  Unzip the files into a folder somewhere on your system and rename the folder "PHP".
  • Ensure that everything for IIS and CGI (i.e., FastCGI) is activated on your system for local development. 
  • Install the latest Azure SDK (checking the system requirements) – currently 1.1 (Feb 2010).
  • Start Visual Studio 2010 (which must be run as an Administrator for the current version of the SDK) and create a new VS Cloud Project…

Colin continues with an illustrated, step-by-step tutorial for completing the PHP-on-Azure project.

TechCentral.ie reports “Azure solution offers new savings and functionality” in its Mamut offers scalable cloud computing to SMBs post of 5/5/2010:

Mamut has strengthened its cloud offering for Irish SMBs with the latest version of Mamut One which now includes web based tools, Mamut Online Backup and Company Dashboard. These tools will allow companies to securely backup business critical data in the cloud and view an online status of important key figures.

The announcement comes in response to demands from small- and middle-sized businesses for scalable cloud computing solutions that directly benefit their business in terms of functionality, performance, price and accessibility.

Mamut Online Backup is a free tool included with Mamut One that works automatically in the background and creates a secure copy of files the moment they are updated or changed. The data is encrypted on the PC and delivered securely to the server, where the data remains encrypted. With the Company Dashboard, decision makers receive a web-based tool that displays the current status of key figures such as revenue, costs, and liquidity.

"Cloud computing is becoming an increasingly popular technology amongst Irish SMBs and online access to business data and web based storage are key elements in our software and services strategy." said Luke Buckley, country manager for Mamut Ireland. "There are big opportunities for SMBs with the cloud, both in terms of reducing costs and for business growth."

Recent research by Microsoft, of which Mamut is a partner, has predicted a 19% rise in small to medium businesses using cloud solutions in some form.

Mamut Online Backup is the first solution based on Microsoft's cloud platform, Windows Azure. Mamut made the decision to extend its offerings by leveraging Windows Azure, which now covers all web based storage requirements for users' needs.

In addition to Mamut Online Backup and the Company Dashboard, the new version of Mamut One contains a range of improvements to ensure the solution simplifies the use and administration of software and services for small businesses. The solution includes improvements to the CRM offering within purchase management, subscription management, the inclusion of a new online help function, as well as updated tax rates.

Read more: http://www.techcentral.ie/article.aspx?id=14988

Ed Hansberry explains How Push Notification Works [with Azure] In Windows Phone 7 in this 5/5/2010 post to InformationWeek’s Over the Air blog:

Microsoft made a bold move by removing third party application multitasking from Windows Phone 7. This was especially surprising since multitasking has always been one of the features that separated it from its competition. Instead, it will rely on push notifications to simulate multitasking for many alerts from network enabled apps, which on a smartphone is just about all of them.

Keep in mind the OS itself does multitask, it just won't allow third party apps to do it. You'll be able to do whatever you want while your music plays on the Zune media player in the background, but you wouldn't be able to do the same with a third party player or a streaming app like Pandora Radio.

One of the built-in apps that will run in the background is the Microsoft Push Notification Service (MPNS). The Windows Phone Developer Blog has two blog posts, part 1 and part 2, that go into detail on how the service works.

Microsoft's solution revolves around Windows Azure. Let's say you have a Twitter app and you want to be notified each time someone mentions your @twittername. As I understand it, your Twitter app registers with the Azure server and lets it know it wants to subscribe to mentions. MPNS gets alerted from Azure and the phone either shows a notification toast (a popup) or the app tile on the home screen updates its display with new information. …

Ed continues with more details of ersatz multitasking with Azure-based alerts.
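
For the device side, registration happens through the HttpNotificationChannel class in Microsoft.Phone.Notification. Here’s an approximate C# sketch; the channel name and the way the URI is handed to your Azure-hosted service are illustrative, and the CTP-era API details may differ slightly from what ships:

using System;
using System.Diagnostics;
using Microsoft.Phone.Notification;

public class PushRegistration
{
    // Illustrative channel name; each app picks its own.
    private const string ChannelName = "MentionsChannel";

    public void OpenChannel()
    {
        // Reuse the channel if the app already created one in a prior run.
        HttpNotificationChannel channel = HttpNotificationChannel.Find(ChannelName);
        if (channel == null)
        {
            channel = new HttpNotificationChannel(ChannelName);
            channel.ChannelUriUpdated += OnChannelUriUpdated;
            channel.Open();
            // Ask MPNS to surface toast popups when the app isn't running.
            channel.BindToShellToast();
        }
        else
        {
            channel.ChannelUriUpdated += OnChannelUriUpdated;
        }
    }

    private void OnChannelUriUpdated(object sender, NotificationChannelUriEventArgs e)
    {
        // The app sends this MPNS URI to its own cloud service (e.g., a Windows
        // Azure web role), which later POSTs notification payloads to it.
        Debug.WriteLine("Send this URI to the cloud service: " + e.ChannelUri);
    }
}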

Katie Serignese observes App modernization gets boost from HP in this 5/5/2010 post to the SD Times blog:

With application modernization seemingly on the minds of many organizations, HP in late April announced two tools to help manage the process: HP Service Test Management (STM) 10.5 and HP Functional Testing 10.0.

Layered on top of HP’s Quality Center, its platform for managing quality through the entire application life cycle, STM 10.5 gives users the ability to look at their new modern application models. Architecture has moved from monolithic and single-siloed to being very component-oriented, said Kelly Emo, HP’s director of application product marketing.

In STM’s graphical testing environment, testers can see all dependent components, potential changes and what touches everything else in order to see where risk points are. This enables testers to know what and where to test, and what the dependencies are before porting legacy code to another system.

“When you have all these moving parts, it’s easy to lose track of what’s dependent on what,” Emo said. “This helps to understand the ripple effect.”

Aside from STM, HP added two features to its Functional Testing automated testing tool. In order to create automated tests using the latest versions of Web 2.0 frameworks (such as AJAX, Dojo or Silverlight), HP added the new Web 2.0 Feature Pack. To extend its tool out to other development frameworks, HP also added the Extensibility Accelerator. These two additions come with the tool free of charge.

The potential movement of legacy applications to the cloud is one of the primary drivers of app modernization.

Bill Zack reports “significant updates” to the Extreme Computing Group’s AzureScope project in his How fast is Azure? post of 5/4/2010:

The Microsoft Extreme Computing Group regularly publishes an extensive set of benchmarks on all aspects of Windows Azure performance at http://azurescope.cloudapp.net/Default.aspx.  (Note that it is itself a Windows Azure application.)

We blogged about this before, but there have been significant updates since then.


These benchmarks are intended to assist you in architecting and deploying your applications and services on Azure. Included in the benchmark suite are tests of data throughput rates, response times, and capacity.

Each benchmark is run against a variety of test cases designed to reflect common use scenarios for Azure development.

To my knowledge no other cloud vendor publishes such a comprehensive set of benchmarks!

Neither have I seen such detailed benchmarks from other PaaS vendors (or IaaS purveyors, for that matter).

Frederic Lardinois expands on Ford’s SYNC interface technology in his Apps on Wheels: Developing Mobile Apps that Work at 70 MPH post of 5/3/2010 to the ReadWriteWeb:

When we talk about mobile apps today, chances are that we are mostly talking about apps for cell phones and - maybe - tablets. The latest trend in mobile apps, however, is apps for cars. One of the companies leading this trend in the U.S. is Ford, which just unveiled a number of apps that students at the University of Michigan created on top of Ford's platform.

Making Mobile Apps Work at 70 MPH

Earlier today, we got a chance to talk to K. Venkatesh Prasad, the group and technical leader of Ford's Infotronics Research and Advanced Engineering team. Ford unveiled its SYNC AppLink technology for controlling Android and Blackberry mobile apps through Ford's voice-driven SYNC interface last month, but as Prasad told us, the company is obviously also looking at mobile apps that are developed specifically for the car.

Cloud Computing in the Commute

As Prasad stressed when we talked to him, developers have gotten very good at developing apps that work well at 0 mph, but interfaces that also work well at 70 mph are still in their infancy. Apps that run in cars obviously have to overcome a number of issues - especially with regards to safety - that aren't normally an issue for developers of mobile apps.

In order to tap into the creativity of students who grew up with mobile apps and social networks, Ford, together with Microsoft and Intel, teamed up with the University of Michigan, where the university's professors and Ford's engineers taught a 12-week course entitled "Cloud Computing in the Commute." The students developed their apps using a Ford Fiesta with a built-in touch screen. The software platform for these projects was Windows 7 and Microsoft's Robotics Developer Studio. On the cloud side, the students used Microsoft's Windows Azure platform. [Emphasis added.]

Frederic continues with a link to a video featuring the Ford engineer who’s spearheading the project and additional details about the students’ project.

Soyatec has updated their Windows Azure SDK for Java (WindowsAzure4j) site with additional details about this open source effort to bridge Java/Eclipse developers to Windows Azure:

[Image: screenshot of the WindowsAzure4j project site]

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie asserts “No, scalability may not be rocket science but it is computer science and not nearly as easy as it might appear” as a preface to her What Goes Up Must Come Down post of 5/5/2010:


In what might be considered an ironic statement, scalability in cloud computing environments is as much about decreasing capacity as it is increasing capacity.

I know, puts my knickers in a twist, too.

The description of “scalability” associated with cloud computing in almost every definition that’s put forth1, however, clearly indicates the need for elastic scalability and it is that modifier that makes all the difference in the world.

See, in the past we’ve just been concerned with managing growth, with addressing the need to increase capacity to match an increase in usage. It may have been slow and steady or explosive and instantaneous, but it was always about an increase in usage. We never really considered how to deal with a decrease and we certainly didn’t take away capacity once we’d allocated it.

Cloud computing, however, does assume that the latter is something we will – and should – do.

Cloud isn’t just about “pay for what you use” it’s also about “use only what you need” and thus transitive logic tells us2 it should be “pay for what you need”. It’s about efficiency of utilization as much as it is costs. And efficiency means use what you need, when you need it, but no more. Do not tie up resources when they aren’t needed, let someone else use them. It’s about the allocation of capacity in a way that makes sense, without waste.

Lori continues with “THE APPLICATION LIMBO” topic and these two footnotes:

1 If they don’t include elastic or rapid scalability in their definition of cloud then someone is trying to sell you something akin to rack space in their Okefenokee Data Center
2 Assume pay=a and use=b and need=c, then a=b and b=c therefore a=c

Microsoft Pinpoint describes Four Ways Your Business Can Benefit from Windows Azure in this 5/5/2010 article for the Pinpoint site:

For many companies, the best strategy to improve the efficiency, reduce the costs, and enhance the security of their IT infrastructure is cloud computing. The Windows Azure platform provides a foundation for running Windows applications and storing data in an Internet-accessible data center (i.e. “the cloud”) instead of at your site. Here are the top four ways it can help your business succeed.

  1. Improve Your Bottom Line …
  2. Free Your Focus …
  3. Easily Scale IT Services …
  4. Open New Markets …

Of course, the author adds some detail to the four ways.

J. Bonasia claims Software Forecast Becoming Steady: Many More Clouds in this 5/4/2010 post to Investors.com:

Pretty much every day comes more proof that a cloud has descended over the software sector, bringing a storm of change.

On Tuesday, Microsoft (MSFT) executives extolled the benefits of cloud computing at a show in Taiwan, emphasizing CEO Steve Ballmer's recent declaration that his firm "is all in."

On Monday, IBM (IBM) bought a company to bolster its cloud portfolio.

[Photo caption: John Kalkman, a Microsoft vice president, spoke about the potential of cloud computing Tuesday at a company event in Taiwan that backed up CEO Steve...]

Cloud computing transforms the delivery of technology. It lets users access software and computer resources over the Web — or in the cloud — rather than installing and managing their own systems.

A handful of startups led the early charge into cloud computing over the past decade. They became known as software-as-a-service vendors. A decade later, SaaS firms have largely been vindicated for their belief in the cloud.

More than 95% of companies plan to maintain or increase their use of SaaS systems, according to a survey released by Gartner on April 29.

The largest pre-cloud software companies — including SAP (SAP), Oracle (ORCL), Microsoft and CA (CA) — have been forced to get active in the cloud. It's a situation of the big guys trying to catch the smaller guys in a competitive field that, for now, offers room for growth.

IBM says global revenue from cloud computing will jump to $126 billion this year from $47 billion in 2008. On Monday, IBM said it would buy privately held Cast Iron Systems. The company makes middleware that will help IBM integrate cloud-based software systems with on-premise software. Cast Iron's platform integrates applications from firms such as Salesforce.com (CRM).

Bonasia continues with background on Marc Benioff, social networks, Lawson Software and NetSuite.

<Return to section navigation list> 

Cloud Security and Governance

Dave Kearns recommends “Be aware of security 'experts' vs. salesmen when it comes to cloud security solutions” in his Security truths get lost in the cloud post of 5/5/2010 to NetworkWorld’s Security blog:

There was an awkward moment during the keynote address for The Experts Conference last week. Conrad Bayer, Microsoft's general manager for identity and access, was addressing the delegates of the Directory and Identity track and, while talking about enterprise and organization movement to cloud computing, put up a slide that compared the migration to the cloud of small, midsize and large organizations.

One of the bullet points read "Small business have less concern for privacy and security."

What???

I questioned Bayer about the slide and he admitted that it was, perhaps, an inelegant way of making the point. As Bayer later explained, the typical small business is only moving one application or service to the cloud so there's less opportunity for inadvertent cross-compromising of both corporate data and personal data. He also noted that the people running small businesses don't read the technology press as much as their counterparts at larger organizations, so they're less exposed to the scare stories about the supposed dangers of cloud computing. …

Dave continues his analysis and concludes:

The cloud, just like your enterprise, is as secure (or not) as we make it. No more, and certainly no less.

EldoS Corporation announced Security for Silverlight and Cloud Services in New Release of SecureBlackbox on 5/5/2010:

EldoS Corporation announced today a new major release of its SecureBlackbox product that opens new horizons to Windows and .NET software developers. New offerings such as secure operations with cloud storage and office document security will help developers create innovative software products and significantly improve existing software solutions. New functionality for counteracting DNS spoofing using the DNSSEC standard will be welcomed by network security professionals.

Remote data storage services, also known as cloud storage, are becoming increasingly popular among small and large businesses alike. While there are plenty of storage providers on the market, their offerings are generally limited to the sale of storage capacity and restricted by custom access protocols. Corporate and home end-users face the problem of ensuring secure data storage and data transfer to and from the clouds. To help them, EldoS introduces a new component of SecureBlackbox – CloudBlackbox. At the moment CloudBlackbox offers methods for accessing Amazon S3 and Microsoft Azure storage – the leaders in the cloud storage business (more services to be supported soon). CloudBlackbox offers the market, in addition to access features, data encryption and integrity control and, therefore, allows simple yet secure transfer and storage of data located outside of home or corporate networks.

The press release continues with more details about SecureBlackbox and CloudBlackbox.

<Return to section navigation list> 

Cloud Computing Events

Alessandro Teglia reported on 5/5/2010 that Microsoft NT Konferenca 2010 starts its engines ! in Portorož, Slovenia on 5/24 through 5/27/2010:

After the big success of the 2009 edition (one of the biggest IT conferences ever held in Slovenia), the NT Konferenca 2010 takes place at the end of the month, specifically from the 24th to the 27th of May.


In the beautiful venue of Portorož, the conference will touch on a lot of different themes, covering Microsoft Office 2010, Visual Studio 2010, SharePoint Server 2010 and many other topics, allowing all the different audiences (consumers, information workers, IT pros and developers) to get an insight into all the brand-new Microsoft products and technologies.

The central theme of this year's NT Konferenca is “Microsoft Office 2010 and other solutions, services, products and technologies that enable access to information and data anywhere, anytime.”

Over three days the conference will host 132 lectures, 12 workshops and 2 classrooms for hands-on labs, plus a dedicated “community” theme area with “Ask the Expert” lectures and thematic seminars, all at different technical levels.

Featuring some of the most brilliant technical speakers and lecturers, including almost ALL the Slovenian MVPs!


(Anywhere. Anytime.) –> so Cloud included as well [Emphasis added.]

BTW: follow the news if you’re interested !

Eric Nelson wrote that the self-paced interactive training (see below) is full, but there is Free Windows Azure training in Reading, UK on the 25th of May for partners, in this 5/5/2010 post:

The 6 weeks of Windows Azure training is full (500 registrations in around a week) but it turns out we have a few places free on the 25th if you can make it to Reading. 14 places when I last checked (today, 5th May).

Register now if you can make it.

Workshop Outline

  • Module 1: Windows Azure Platform overview
  • Module 2: Introduction to Windows Azure
  • Module 3: Building services using Windows Azure
  • Module 4: Windows Azure storage
  • Module 5: Building applications using SQL Azure
  • Module 6: Introduction to .NET Services
  • Module 7: Building applications using the .NET Service Bus

Eric Nelson reminds developers to Register now for the FREE UK Windows Azure Self-paced Interactive Learning Course starting May 10th:


We (David Gristwood and I) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a 6 week period starting May 10th – without ever needing to leave the comfort of your home/office.  The course is derived from the internal training Microsoft gives on Azure which is both fun and challenging in equal parts – and we felt was just too good to keep to ourselves! We will be releasing more details nearer the date but hopefully the following is enough to convince you to register and … recommend it to a colleague or three :-)

What we have produced is the “Microsoft Azure Self-paced Learning Course”. This is a free, interactive, self-paced, technical training course covering the Windows Azure platform – Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six week period finishing on June 18th. During the course you will work from your own home or workplace, and get involved via interactive Live Meeting sessions, watch on-line videos, work through hands-on labs and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them. This is a technical course, aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform, hence a prerequisite for this course is at least six months programming in the .NET framework and Visual Studio.

Check out the full details of the event or go straight to registration.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Ignacio M. Llorente asserts “The authors of the widely used OpenNebula toolkit form start-up” in his OpenNebula Cloud Toolkit Goes Commercial post of 5/5/2010:

The authors of the widely used OpenNebula toolkit have founded a company to provide value-added enterprise-solutions around this leading open source technology for cloud computing. C12G Labs has been created to address the growing demand for commercial support and services around OpenNebula.

"Our experience is that one single cloud solution does not fit all the requirements and constraints from any data center. We provide our partners with technology and services to build their custom cloud solution, product or service", said Ignacio M. Llorente, co-lead of the OpenNebula open-source project and Chief Executive Advisor of C12G Labs. "We are very excited with this new venture that will contribute to the future sustainability of OpenNebula. This open-source cloud-enabling technology will continue being distributed under Apache license and matured through a vibrant community. C12G has a strong commitment with OpenNebula and will contribute back to the community repository".

Cloud management solutions, like OpenNebula, are key components in any cloud architecture, being responsible for the secure, efficient and scalable management of the cloud resources. C12G builds custom Cloud solutions by adapting an Enterprise Edition of OpenNebula to meet the performance, integration and configuration requirements of infrastructure, processes or use cases of partners and customers.

"OpenNebula is the result of many years of research and the interaction with some of the major players in the Cloud arena. From the beginning, OpenNebula has been designed to be flexible enough to adapt to any infrastructure and to scale to thousands of virtual machines and cores" said Ruben S. Montero, co-lead of the OpenNebula open-source project and Chief Technology Advisor of C12G Labs. "We are convinced that OpenNebula will be one of the key technologies needed to build next generation Cloud infrastructures".

The first version of the OpenNebula Enterprise Edition will be available in a few days to customers and partners with an active support subscription.

RightScale offers a collection of typical EC2 Site Architecture Diagrams in this fully illustrated support page:

The following diagrams show some of the common site architectures in the Cloud.  Depending on your computing resource requirements and budget, RightScale provides you with the flexibility to create a custom architecture that delivers the performance and failover redundancy necessary to run your site in the Cloud.  Several of the most common architectures are described below.  Use one of the setups below as a model or easily customize a setup for your own purposes.

Following are the first two diagrams of the series:

All-in-one Single Server Setup

Use one of the "All-in-one" ServerTemplates, such as the LAMP (Linux, Apache, MySQL, PHP) ServerTemplate to launch a single server that contains Apache, as well as your application and database.


Basic 4-Server Setup with EBS

This is the most common architecture on the cloud.  Each front-end server acts as a load balancer and application server.  You also have a master and slave database for redundancy and failover.  Backups of your database are saved as EBS Snapshots.


David Linthicum claims “IBM makes another enterprise middleware acquisition to fill an enterprise hole -- expect more to come” in a preface to his What IBM's purchase of Cast Iron means post of 5/4/2010 to InfoWorld’s Cloud Computing blog:

I was not surprised by the announcement Monday that IBM was purchasing Cast Iron, a longtime integration appliance and on-demand integration provider. Actually, I was going to have some guys from Cast Iron on my podcast Friday, but they politely delayed. Now I know why.

I see the reason behind the purchase. Cast Iron has been providing an application integration appliance since earlier this decade before it began to focus on the emerging SaaS space, offering one of the first out-of-the-box integration solutions for Salesforce.com and Oracle CRM. After some leadership changes, it recently moved into the integration-on-demand space, dispensing core integration services out of the cloud. …

The acquisition of Cast Iron fills some holes that IBM has had in its integration stack, and IBM loves to buy rather than build sought-after features and functions. Indeed, IBM's software division has made 55-plus acquisitions since 2003, and I suspect that Cast Iron is going to be one of a few more that occur this summer -- perhaps including another middleware vendor.

Cast Iron is really a second-generation application integration technology vendor following the likes of Saga Software (yours truly was the CTO), WebMethods (now a part of Software AG), SeeBeyond (now a part of Sun, which is now a part of Oracle), and Mercator (now part of IBM; yours truly again has been the CTO). Cast Iron was trying to present a much simplified integration engine delivered as an appliance, thus providing a "drop and go"-type deployment. While not perfect at first, continual refinement led to better integration technology and to localization for specific problem domains such as Salesforce.com-to-enterprise integration, out of the box. …

<Return to section navigation list> 
