Thursday, December 29, 2011

Windows Azure and Cloud Computing Posts for 12/28/2011+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Avkash Chauhan (@avkashchauhan) started a Hadoop series with an Apache Hadoop on Windows Azure Part 1- Creating a new Windows Azure Cluster for Hadoop Job post on 12/28/2011:

Once you have applied for an Apache Hadoop on Windows Azure CTP account, you can create a new cluster using the information in this post. If you want to learn more about the Hadoop on Azure CTP, visit my previous blog here.

After you have been granted Hadoop on Azure CTP access, use your Windows Live account to log in at http://www.hadooponazure.com

Now you need to enter the following information:

  • Step 1: Enter the DNS Name
  • Step 2: Select Cluster Size
  • Step 3: Enter Username and password for cluster login settings
  • Step 4: Request Cluster

Once the above information is submitted, cluster and node creation starts as below:

Because I have chosen the small cluster size, which includes 4 worker nodes, there will be a total of 5 nodes (4 worker nodes and 1 head node). The node creation status will be shown in multiple screens as below:

and more status...

Finally, the cluster will be ready to create new Hadoop Jobs, as below:

Keywords: Azure, Hadoop, Apache, BigData, Cloud, MapReduce

Just received my Apache Hadoop on Windows Azure invitation yesterday and went through this process without a hitch.


Avkash Chauhan (@avkashchauhan) continued his Hadoop series with Apache Hadoop on Windows Azure Part 2 - Creating a Pi Estimator Hadoop Job on 12/28/2011:

Once you have created a cluster in Windows Azure, you will have a few prebuilt samples provided in your account, so let’s select “Samples” as below:

In the Hadoop Samples gallery, let’s select the “Pi Estimator” sample below:

You will see the “Pi Estimator” sample details as below. After reading the details and description, you can go ahead and deploy the job to your cluster in a single click as below:

A new Job window will open where you can add and verify the parameters used with our Hadoop job. Below you can verify the parameters and then, when ready, just “Execute Job”:

Now the Hadoop job will start and a notification will be shown as below:

Finally, when the job is completed, you will see the final results as below:

Pi Example


Job Info

Status: Completed Successfully
Type: jar
Start time: 12/29/2011 6:21:49 AM
End time: 12/29/2011 6:22:56 AM
Exit code: 0

Command

call hadoop.cmd jar hadoop-examples-0.20.203.1-SNAPSHOT.jar pi 16 10000000

Output (stdout)

Number of Maps = 16
Samples per Map = 10000000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
Job Finished in 63.639 seconds
Estimated value of Pi is 3.14159155000000000000

Errors (stderr)

11/12/29 06:21:53 INFO mapred.JobClient: Running job: job_201112290558_0001
11/12/29 06:21:54 INFO mapred.JobClient: map 0% reduce 0%
11/12/29 06:22:20 INFO mapred.JobClient: map 12% reduce 0%
11/12/29 06:22:23 INFO mapred.JobClient: map 50% reduce 0%
11/12/29 06:22:32 INFO mapred.JobClient: map 62% reduce 0%
11/12/29 06:22:35 INFO mapred.JobClient: map 100% reduce 0%
11/12/29 06:22:38 INFO mapred.JobClient: map 100% reduce 16%
11/12/29 06:22:44 INFO mapred.JobClient: map 100% reduce 100%
11/12/29 06:22:55 INFO mapred.JobClient: Job complete: job_201112290558_0001
11/12/29 06:22:55 INFO mapred.JobClient: Counters: 27
11/12/29 06:22:55 INFO mapred.JobClient: Job Counters
11/12/29 06:22:55 INFO mapred.JobClient: Launched reduce tasks=1
11/12/29 06:22:55 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=189402
11/12/29 06:22:55 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/12/29 06:22:55 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/12/29 06:22:55 INFO mapred.JobClient: Rack-local map tasks=1
11/12/29 06:22:55 INFO mapred.JobClient: Launched map tasks=16
11/12/29 06:22:55 INFO mapred.JobClient: Data-local map tasks=15
11/12/29 06:22:55 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=22906
11/12/29 06:22:55 INFO mapred.JobClient: File Input Format Counters
11/12/29 06:22:55 INFO mapred.JobClient: Bytes Read=1888
11/12/29 06:22:55 INFO mapred.JobClient: File Output Format Counters
11/12/29 06:22:55 INFO mapred.JobClient: Bytes Written=97
11/12/29 06:22:55 INFO mapred.JobClient: FileSystemCounters
11/12/29 06:22:55 INFO mapred.JobClient: FILE_BYTES_READ=2958
11/12/29 06:22:55 INFO mapred.JobClient: HDFS_BYTES_READ=3910
11/12/29 06:22:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=371261
11/12/29 06:22:55 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
11/12/29 06:22:55 INFO mapred.JobClient: Map-Reduce Framework
11/12/29 06:22:55 INFO mapred.JobClient: Map output materialized bytes=448
11/12/29 06:22:55 INFO mapred.JobClient: Map input records=16
11/12/29 06:22:55 INFO mapred.JobClient: Reduce shuffle bytes=448
11/12/29 06:22:55 INFO mapred.JobClient: Spilled Records=64
11/12/29 06:22:55 INFO mapred.JobClient: Map output bytes=288
11/12/29 06:22:55 INFO mapred.JobClient: Map input bytes=384
11/12/29 06:22:55 INFO mapred.JobClient: Combine input records=0
11/12/29 06:22:55 INFO mapred.JobClient: SPLIT_RAW_BYTES=2022
11/12/29 06:22:55 INFO mapred.JobClient: Reduce input records=32
11/12/29 06:22:55 INFO mapred.JobClient: Reduce input groups=32
11/12/29 06:22:55 INFO mapred.JobClient: Combine output records=0
11/12/29 06:22:55 INFO mapred.JobClient: Reduce output records=0
11/12/29 06:22:55 INFO mapred.JobClient: Map output records=32

Finally, you can use the Arrow button to go back, and you will see your final job count and history listed as below:
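
For readers new to Hadoop’s samples, the Pi Estimator is essentially a Monte Carlo simulation: each of the 16 map tasks generates 10,000,000 sample points in a unit square and counts how many land inside the inscribed quarter circle; the single reduce task sums the counts, and 4 times the inside/total ratio approximates Pi (hence the 3.14159155 result above). The following C# console sketch illustrates the same math on a single machine; it is only an illustration of the principle, not the Hadoop Java sample itself:

using System;

// Single-machine illustration of the Monte Carlo estimate that the Pi Estimator
// sample distributes across 16 map tasks and one reduce task.
class PiSketch
{
    static void Main()
    {
        const long maps = 16;                 // "Number of Maps = 16"
        const long samplesPerMap = 10000000;  // "Samples per Map = 10000000"
        long totalSamples = maps * samplesPerMap;
        long inside = 0;
        Random random = new Random(42);

        for (long i = 0; i < totalSamples; i++)
        {
            double x = random.NextDouble();
            double y = random.NextDouble();
            if (x * x + y * y <= 1.0)         // point falls inside the quarter circle
            {
                inside++;
            }
        }

        // inside / totalSamples approximates Pi / 4.
        Console.WriteLine("Estimated value of Pi is {0}", 4.0 * inside / totalSamples);
    }
}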


Avkash Chauhan (@avkashchauhan) completed his Hadoop series with Apache Hadoop on Windows Azure Part 3 - Creating a Word Count Hadoop Job with a few twists on 12/28/2011:

In this example I am starting a new Hadoop job with a few intentional errors, to understand the processing better. You can go to Samples and deploy the WordCount sample job to your cluster. Verify all the parameters and then you can start the job as below:

Note: There are two errors in the above steps:

  1. Intentionally, I haven’t uploaded the davinci.txt file to the cluster yet
  2. I have given a wrong parameter

Soon after the job starts, you will hit this error:

As you can see above, the class name was wrong, which resulted in an error. Now you can change the parameter to the correct name, “wordcount”, and restart the job.

Now you will hit another error as below:

To solve this problem, let’s upload the text file named davinci.txt to the cluster. (Please see the wordcount sample page for more info about this step.)

To upload the file, we will launch the Interactive JavaScript console as below:

When the Interactive JavaScript console is open, you can use the fs.put() command to select the txt file from your local machine and upload it to the desired folder in the HDFS file system on the cluster.

Once the file upload is completed, you will get the result message:

Let’s run the job again, and now you will see the expected results as below. (Note: if you later rerun the job and hit an “output directory already exists” error, you just need to pass a new output directory name as the second parameter.)

WordCount Example

Job Info

Status: Completed Successfully
Type: jar
Start time: 12/29/2011 5:33:00 PM
End time: 12/29/2011 5:33:58 PM
Exit code: 0

Command

call hadoop.cmd jar hadoop-examples-0.20.203.1-SNAPSHOT.jar wordcount /example/data/davinci.txt DaVinciAllTopWords

Output (stdout)

Errors (stderr)

11/12/29 17:33:02 INFO input.FileInputFormat: Total input paths to process : 1
11/12/29 17:33:03 INFO mapred.JobClient: Running job: job_201112290558_0003
11/12/29 17:33:04 INFO mapred.JobClient: map 0% reduce 0%
11/12/29 17:33:29 INFO mapred.JobClient: map 100% reduce 0%
11/12/29 17:33:47 INFO mapred.JobClient: map 100% reduce 100%
11/12/29 17:33:58 INFO mapred.JobClient: Job complete: job_201112290558_0003
11/12/29 17:33:58 INFO mapred.JobClient: Counters: 25
11/12/29 17:33:58 INFO mapred.JobClient: Job Counters
11/12/29 17:33:58 INFO mapred.JobClient: Launched reduce tasks=1
11/12/29 17:33:58 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=29185
11/12/29 17:33:58 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/12/29 17:33:58 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/12/29 17:33:58 INFO mapred.JobClient: Rack-local map tasks=1
11/12/29 17:33:58 INFO mapred.JobClient: Launched map tasks=1
11/12/29 17:33:58 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=15671
11/12/29 17:33:58 INFO mapred.JobClient: File Output Format Counters
11/12/29 17:33:58 INFO mapred.JobClient: Bytes Written=337623
11/12/29 17:33:58 INFO mapred.JobClient: FileSystemCounters
11/12/29 17:33:58 INFO mapred.JobClient: FILE_BYTES_READ=467151
11/12/29 17:33:58 INFO mapred.JobClient: HDFS_BYTES_READ=1427899
11/12/29 17:33:58 INFO mapred.JobClient: FILE_BYTES_WRITTEN=977063
11/12/29 17:33:58 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=337623
11/12/29 17:33:58 INFO mapred.JobClient: File Input Format Counters
11/12/29 17:33:58 INFO mapred.JobClient: Bytes Read=1427785
11/12/29 17:33:58 INFO mapred.JobClient: Map-Reduce Framework
11/12/29 17:33:58 INFO mapred.JobClient: Reduce input groups=32956
11/12/29 17:33:58 INFO mapred.JobClient: Map output materialized bytes=466761
11/12/29 17:33:58 INFO mapred.JobClient: Combine output records=32956
11/12/29 17:33:58 INFO mapred.JobClient: Map input records=32118
11/12/29 17:33:58 INFO mapred.JobClient: Reduce shuffle bytes=466761
11/12/29 17:33:58 INFO mapred.JobClient: Reduce output records=32956
11/12/29 17:33:58 INFO mapred.JobClient: Spilled Records=65912
11/12/29 17:33:58 INFO mapred.JobClient: Map output bytes=2387798
11/12/29 17:33:58 INFO mapred.JobClient: Combine input records=251357
11/12/29 17:33:58 INFO mapred.JobClient: Map output records=251357
11/12/29 17:33:58 INFO mapred.JobClient: SPLIT_RAW_BYTES=114
11/12/29 17:33:58 INFO mapred.JobClient: Reduce input records=32956

If you run the same Job again you will see the following results:

WordCount Example


Job Info

Status: Failed
Type: jar
Start time: 12/29/2011 5:46:11 PM
End time: 12/29/2011 5:46:13 PM
Exit code: -1

Command

call hadoop.cmd jar hadoop-examples-0.20.203.1-SNAPSHOT.jar wordcount /example/data/davinci.txt DaVinciAllTopWords

Output (stdout)
Errors (stderr)

org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory DaVinciAllTopWords already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:134)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:830)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:791)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:791)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:494)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
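
Two editorial notes on the output above. First, the rerun fails only because Hadoop refuses to overwrite an existing output directory; pass a different output directory name (or delete the old one) and the job runs again. Second, the counters from the successful run describe the computation itself: Map output records=251357 is roughly the total number of words the mapper emitted from davinci.txt, while Reduce input groups=32956 is the number of distinct words. The following stand-alone C# LINQ sketch (an illustration only, not the Hadoop Java source) performs the same kind of word count on a local copy of the file:

using System;
using System.IO;
using System.Linq;

// Local illustration of the WordCount computation: tokenize on whitespace,
// group identical words (the "map" step emits (word, 1) pairs), and count
// each group (the "reduce" step sums the 1s).
class WordCountSketch
{
    static void Main()
    {
        string text = File.ReadAllText("davinci.txt");   // assumes a local copy of the file

        var counts = text
            .Split(new[] { ' ', '\t', '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .GroupBy(word => word)
            .Select(g => new { Word = g.Key, Count = g.Count() })
            .OrderByDescending(entry => entry.Count);

        foreach (var entry in counts.Take(20))            // top 20 words
        {
            Console.WriteLine("{0}\t{1}", entry.Word, entry.Count);
        }
    }
}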


<Return to section navigation list>

SQL Azure Database and Reporting

Bob Beauchemin (@bobbeauch) wrote Using Data Tier Applications to Move and Manage SQL Azure Databases for SQL Server Pro magazine’s January 2012 issue:


Read the entire article here.

 


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

My (@rogerjenn) Problems with Microsoft Codename “Data Explorer” - Aggregate Values and Merging Tables - Solved post of 12/29/2011 described solutions for emulating the Codename “Social Analytics” WinForm client’s summary data used to create graphs:

Update 12/29/2011: Miguel Llopis replied on 12/28/2011 regarding my answer in the forum to Alejandro Lopez Lago about issues with the Merge action:

Please try using the "Optional Prefix" field in the Merge builder for either one of the two tables and let us know if that fixes the issue. We are currently working, as we speak, to improve usability of this Merge builder so any feedback is really welcome.

Adding a prefix with any field selected in either list box creates a usable table that can be fixed up by further modification. This is a cumbersome workaround that needs fixing (see steps 5 through 10 at the end of this post).


Update 12/28/2011: Alejandro Lopez Lago replied as follows to my thread about this problem in the Microsoft codename Data Explorer forum:

The error with List.Average occurs because it is trying to average a resource or task named "ToneReliability" instead of the column. To fix this, put square brackets around the name.

Steps 2 and 3 below have been updated to reflect Alejandro’s solution. This problem is solved, but the square-bracket requirement needs documentation.


My downloadable Codename “Social Analytics” WinForms Client Sample App automatically generates a summary Users\UserName\AppData\Local\ContentItems.csv file while retrieving rows of the VancouverWindow8 dataset from the Windows Azure Marketplace DataMarket. The file is used to optionally recreate the graph when reopening the WinForms Sample App, as shown here:


The sample Microsoft Codename “Data Explorer” mashup created in my Mashup Big Data with Microsoft Codename “Data Explorer” - An Illustrated Tutorial post of 12/27/2011 accurately replicates the layout of the WinForm sample’s DataGrid control:


as shown by the ContentItems table snapshot in the Desktop client:


Here’s the final result of the fixes for aggregate values and merging tables:


See the original post for a tutorial on merging and fixing up tables with Codename “Data Explorer.”


Hilary Stoupa produced a 00:18:03 Infopath 2010, OData and Cascading Filtering in Forms video and Channel9 posted it on 12/28/2011:

In this video Microsoft MVP Hilary Stoupa, Qdabra Software, discusses how to efficiently incorporate cascading filtering in your InfoPath 2010 Forms when accessing OData services. It's a great technique that you'll certainly want to use!


Anant Jhingran (@jhingran) asserted “In the year to come, APIs will continue to transform into core business tools” in an introduction to his list of Six API predictions for 2012 of 12/19/2011 to O’Reilly Media’s Radar blog:

For businesses, APIs are clearly evolving from a nice-to-have to a must-have. Externalization of back-end functionality so that apps can interact with systems, not just people, has become critical.

As we move into 2012, several API trends are emerging.

Enterprise APIs becoming mainstream

I see a lot of discussion about Facebook, Twitter and other public APIs. However, the excitement of these public APIs hides the real revolution. Namely, enterprises of all sizes are API-enabling their back-end systems. This opens up the aperture of the use of back-end systems, not just through apps built by the enterprise, but also through apps built by partners and independent developers.

For example, several large telecom enterprises, like AT&T, are embracing APIs because, even with their abundant resources, they cannot match what the world outside the enterprise can do for them — build apps that, in the end, bring in more business. Today, I estimate that 10% of enterprises are doing APIs, and another 10% are considering it. In 2012, I predict that these percentages are more likely to be 30% and 80%, respectively, reflecting the pace at which APIs are going mainstream.

API-centric architectures will be different from portal-centric or SOA-centric architectures

Websites (portals) are for people integration. Service-oriented architectures (SOA) are for app-to-app integration. While both websites and SOA use back-end systems through "internal" APIs, the new API world focuses on integration with apps and developers, not with people (via portals) or processes (via SOA). There are three specific things that are different:

  1. Enterprises need to think outside-in as opposed to inside-out. In an outside-in model, one would start with easy consumption (read REST) of perhaps "chatty" APIs and then improve upon them. This is in contrast to thinking performance first and ease of use second.
  2. Enterprises have to be comfortable handling unpredictable demand and rapidly changing usage patterns as opposed to the more predictable patterns in the enterprise software environment.
  3. Enterprises will need to make websites and even some internal processes clients of the "new" API layer instead of having them continue to use back-end systems directly. In this way, APIs will become the de facto and default way of accessing the back-end systems. Also, increasingly, the API layer will be delivered through a cloud model to handle the more rapid and evolving provisioning requirements.

Data-centric APIs increasingly common

Siri and WolframAlpha are great examples of data-centric APIs. There is a huge market for data, and today it is mostly made available through custom feeds (for example, Dun & Bradstreet) or through a sea of xls/csv files on a website (for example, Data.gov). The former is a highly paid model, and the latter is a free-for-all model. Clearly, a new model will emerge — and already is emerging — in the middle. This is the model in which data is brokered by APIs and free and freemium models will co-exist. Expect to see more examples of enterprises for which data is the primary business and where using the data through apps is the new business model.

The first thing enterprises like this are doing is to API-enable their data. Now, RESTifying data is not easy, and there are as many schools of thought on how best to do it as there are data providers. However, I expect some combination of conventional and de facto standards, such as the Open Data Protocol (OData), to become increasingly common. I do not believe that the semantic web or the Resource Description Framework (RDF) model of data interchange is the answer. It goes against the grain of ease of use and adoption.

Many enterprises will implement APIs just to get analytics

A common theme in enterprise technologies is that a spend happens first in business automation and second in business optimization. The former enables bottom-line improvements; the latter enables top-line improvements. The API-adoption juggernaut is currently focused on business automation. However, as more and more traffic flows through the APIs, analytics on these APIs provides an increasingly better view of the performance of the enterprise, thereby benefiting IT and business optimizations. If this trend continues and if business optimization is the ultimate goal, a logical conclusion is that APIs become a means to the end for optimization. Therefore, all enterprises focused on business optimization must implement APIs so they have one "choke point" from which a lot of business optimization analytics can derive data.

APIs optimized for the mobile developer

Mobile apps are becoming recognized as the primary driver for API development and adoption. There are many different devices, and each has its own requirements. Most mobile apps have been developed for iPhone (iOS) and Android devices, but the next big trend is HTML5/JavaScript for apps that can run on any device.

Mobile devices in general need to receive less data in API responses and should not have to make repeated API calls to perform simple tasks. Inefficient APIs make things worse for the app developer and the API provider because problems are multiplied by mobile demand patterns (many small API requests) and concurrency (the sheer number of devices hitting the API at once). In 2012, many providers will realize they need to:

  • Let developers filter the size and content of the API response before it's returned to the app.
  • Give developers the right format for their app environment — plist for iOS and JSONP for HTML5/JavaScript.

OAuth 2.0 as the default security model

Apps are the new intermediaries in the digital world, enabling buyers and sellers to meet in ways that make the most sense. In the context of APIs, the buyer is the end-user and the seller is the API provider. Good apps are the ones that can package the provider's API in a great user experience that encourages the end user to participate. The growth of apps as intermediaries with valued services like Salesforce.com, Twitter, Facebook, eBay, and others requires a way for users to try the app for the first time without compromising their private data and privileges.

OAuth 2.0 makes it easy for end users to adopt new apps because they can test them out. If they don't like or don't trust an app, users can terminate the app's access to their account. In 2012, this will be the default choice for securing APIs that enable end-users to interact through apps with their valued services.

Microsoft’s Codename “Social Analytics” API is another example of a specialized API for (Twitter, Facebook and Stack Overflow) analytics.


<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

Kent Weare (@wearsy) posted SAP meet Azure Service Bus – EAI/EDI December 2011 CTP on 12/29/2011:

The Azure Service Bus EAI/EDI December 2011 CTP has been out for about 2 weeks at the time of this blog post. As soon as I saw the Service Bus Connect feature in the documentation I wanted to try and hook up the Service Bus to SAP. The organization that I work for utilizes SAP to support many of its core business processes. We are also heavily invested in BizTalk Server when integrating SAP with other Corporate Systems. For the past 5 years much of my work experience has involved integration with SAP. So much that I had the opportunity to write a couple chapters on BizTalk-SAP integration in the Microsoft BizTalk 2010 Line of Business Systems Integration book.

Integrating with SAP is of great interest to me both personally and professionally. I like the challenge of taking two different types of systems that would seemingly be impossible to integrate, yet finding a way to do it. I also enjoy the expression on SAP consultants’ faces when you take a Microsoft product and successfully execute operations inside their system like creating customer records or creating Work Orders.

Using the Service Bus Connect feature is not the only way of bridging your On-Premise Line of Business Systems with external parties via cloud based messaging technologies. Within the past year Microsoft also introduced a feature called BizTalk Server 2010 AppFabric Connect for Services. This tool allows for BizTalk to expose an endpoint via a Service Bus Relay. I have also used this mechanism to communicate with SAP via a Mobile Device and it does work.

There are a few differences between Service Bus Connect and AppFabric Connect for Services. Some of these differences include:

  • Any message transformations that need to take place actually take place in the Cloud instead of On Premise. When integrating with SAP, you never want to expose SAP schemas to calling clients. They are ugly to say the least. In this scenario we can expose a client-friendly, or canonical, schema and then transform this message into our SAP request in Azure.
  • AppFabric Connect for Services utilizes a full deployment of BizTalk in your environment, whereas Service Bus Connect only requires the BizTalk Adapter Pack when communicating with SAP. All message transformations and orchestration take place On Premise, and the cloud (Azure Service Bus) is basically used as a communication relay.

When connecting to On-Premise Line of Business Systems, both methods require the BizTalk Adapter Pack to be installed On-Premise. The BizTalk Adapter Pack is included in your BizTalk license. Licensing details for Service Bus Connect have not been released at the time of this writing.

The following walkthrough assumes you have some experience with the new Service Bus CTP. If you haven’t looked at the CTP before I suggest that you visit a few of the following links to get more familiar with the tool:

Also it is worth pointing out another blog post written by Steef-Jans Wiggers where he discusses Oracle integration with Service Bus Connect.

Building our Application

  • The first thing we need to do is to create a new ServiceBus – Enterprise Application Integration project. In my case I am calling it HelloSAP.


  • Since we know that we want to communicate with an On-Premise LOB system like SAP we need to Add a ServiceBus Connect Server. We can do this by accessing Server Explorer, right mouse clicking on ServiceBus Connect Servers and then selecting Add Server. When prompted we can provide a host name of localhost since this is a local environment.


  • We now can expand our Service Bus Connect Servers hierarchy. Since we want to build an SAP interface we can right mouse click on SAP and select Add SAP Target


  • If you have ever used the BizTalk Adapter Pack before, you are now in familiar territory. This is (almost) the same wizard that we use to generate schemas when connecting to SAP systems via BizTalk. There is a subtle difference in the bottom left corner called Configure Target Path which we will discuss in a few moments. If you are unfamiliar with this screen you are going to need some help from your SAP BASIS Admin to provide you with the connection details required to connect to SAP. Also if you are interested in further understanding everything that is going on in this screen I recommend you pick up the BizTalk LOB book that I previously talked about as I discuss the different aspects of this wizard in great detail. (ok..no more shameless plugs)


  • We now want to select the type of interface that we want to interact with. For the purpose of this blog post I am going to select a custom IDOC that is used when submitting timesheets from our field personnel. In my case, the version of SAP that I am connecting to is 700 so that is why I am selecting the ZHR_CATS IDOC that corresponds to this version. Once again, if you are unsure you will need to speak to your BASIS Admin.


  • Notice how we cannot click the OK button after establishing a connection to SAP and selecting an IDOC? We now need to create a Target Path. Creating a Target Path will provide the Bridge from the Azure Service Bus into SAP. Click the Configure button to continue.


  • Assuming that we have not been through this exercise before we need to select Add New LobRelay from the Select LOB Relay to host the LOB Target: dropdown list.


  • Another dialog box will appear. Within this dialog box we need to provide our CTP Labs namespace, a Relay path, Issuer name and key. For Relay path:, we can really provide whatever we want here. It will essentially make up the latter portion of the URI for the Endpoint that is about to be created.


  • Now we are prompted to Enter LOB Target sub-path. Once again this value can be whatever we want to choose. Since the HR Timesheet module inside of SAP is often called CATS, I will go ahead and use this value here.


  • Now with our Target Path configured we are able to select the OK button to proceed.


  • Inside Server Explorer we now have an entry underneath SAP. This represents our End Point that will bridge requests coming from the cloud to SAP.


  • At this point we haven’t added any artifacts to our Enterprise Application Integration project that we created earlier. This is about to change. We need to right mouse click on our SAP endpoint and then select Add schemas to HelloSAP


  • We will now get prompted for some additional information in order to re-establish a connection to SAP so that we can generate Schemas that will enable us to send a message to SAP in a format that it is expecting. You may also notice that we aren’t being prompted for any SAP server information. In the Properties grid you will notice that this information is already populated because we had previously specified it when using the Consume Adapter Service Wizard.


  • Inside our solution, we will now discover that we have our SAP schemas in a folder called LOB Schemas.


  • For the purpose of this blog post, I have created another folder called Schemas and saved a Custom Schema called CloudRequest.xsd here. This is the message that our MessageSender application will be sending in once we test our solution. (BTW: I do find that the Schema editor that is included in BizTalk is much more intuitive and user friendly than this one. I am not a big fan of this one)


  • We now need to create a Map, or Transform, to convert our request message into a request that SAP will understand.


  • Next we need to add a Bridge on to the surface of our Bridge Configuration. Our Bridge will be responsible for executing our Map that we just created and then our message will get routed to our On-Premise end point so that our Timesheet can be sent to SAP.


  • We now need to set the message type that we expect will enter the bridge. By double clicking on our TimeSheetBridge we can then use the Message Type picker to select our Custom message type: CloudRequest.


  • Once we have selected our message type, we can then select a transform by clicking on the Transform Xml Transform box and then selecting our map from the Maps Collection.


  • Before we drag our LOB Connection shape onto our canvas we need to set our Service Namespace. This is the value that we created when we signed up for the CTP in the Azure Portal. To set the Service Namespace we need to click on any open space, in the Bridge Configuration canvas, and then look in the Properties Page. Place your Service Namespace here.


  • We are now at the point where we need to wire up our XML One-Way Bridge to our On-Premise LOB system. In order to do so we need to drag our SAP instance onto the Bridge Configuration canvas.


  • Next, we need to drag a Connection shape onto the canvas to connect our Bridge to our LOB system.


  • The next action that needs to take place is setting up a Filter Condition between our LOB Shape and our Bridge. You can think of this like creating a subscription. If we wanted to filter messages by their content we would be able to do so here. Since we are interested in all messages we will just Match All. In order to set this property we need to select our Connection arrow then click on the Filter Condition ellipses.


  • If you have used the BizTalk Adapter Pack in the past you will be familiar with SOAP Action headers that need to be set in your BizTalk Send Port. Since we don’t have Send Ports per se, we need to set this action in the Route Action as part of the One-Way Connection shape. In the Expression text box we want to put the name of our Operation, which is http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ZHR_CATS//700/Send, wrapped with single quotes ‘ ’. We can obtain this value by selecting our Service Bus Connect Server endpoint and then viewing the Properties page. Within the Properties page there is an Operations arrow that can be expanded, and we will find this value there. In the Destination (Write To) we want to set our Type to Soap and our Identifier to Action.


  • There is one last configuration that needs to take place before we enable and deploy our service. We need to set our Security Type. We can do so by selecting our SAP – ServiceBus Connect Server instance from Server Explorer. Then in the Properties Page, click on the SecurityType ellipses. Determining which Security type to use will depend upon how your SAP instance has been configured. In my case, I am using ConfiguredUsername and I need to provide both a Username and Password.


  • With our configuration set, we can now enable our SAP – ServiceBus Connect Server instance by right mouse clicking on it and then selecting Enable.


  • We can now deploy our application to Azure by right mouse clicking on our Visual Studio solution and selecting Deploy.


Testing Application

  • In order to test our application we can use the MessageSender tool that is provided with the CTP Samples/Tools. It will simply allow us to submit EDI, or in this case XML, messages to an endpoint in the Azure Service Bus. In order to successfully submit these messages we need to provide our Service Namespace, Issuer Name, Shared Secret, Service Bus endpoint address, a path to the XML file that we want to submit, and indicate that we are submitting an xml document. Once we have provided this information we can hit the enter key and, provided we do not have errors, we will see a Message sent successfully message.

Note: In the image below I have blocked my Shared Secret (in red) for privacy reasons.


  • If we launch our SAP GUI we should discover that it has received a message successfully.


  • We can then drill down into the message and discover the information that has been posted to our timesheet.


Exceptions

While testing, I ran into an exception. In the CATSHOURS field I messed up the format of the field and sent in too much data. The BizTalk Adapter Pack/SAP Adapter validated this incoming data against the SAP schema that is being used to send messages to SAP. The result is that a message was returned to the MessageSender application. I thought that this was pretty interesting. Why? In my solution I am using a One-Way bridge and this exception is still being propagated to the calling application. Cool and very beneficial.


Conclusion

Overall the experience of using Service Bus Connect was good. There were a few times I had to forget how we do this in BizTalk and think about how Service Bus Connect does it. An example of this was the SOAP Action headers that BizTalk developers are used to manipulating inside of Send Ports. I am not saying one way is better than the other but they are just different. Another example is the XML Schema editor. I find the BizTalk editor to be much more user friendly.

While I am not convinced that the current state of Service Bus Connect is ready for primetime (they have CTPs for a reason), I am very impressed that the Microsoft team could build this type of functionality into a CTP. For me personally, this type of functionality (connecting to On-Premise LOB systems) is a MUST HAVE as organizations start evolving towards Cloud computing.

Bravo, Wearsy! Great post.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Matthew Weinberger (@M_Wein) reported Microsoft Windows Azure PaaS Gets Node.js Update in a 12/29/2011 post to the TalkinCloud blog:

Microsoft’s announcement that the Node.js programming system had come to the Windows Azure platform-as-a-service (PaaS) cloud by way of an SDK preview made many developers very happy. So it’s probably going to come as good news to Node.js fans that Windows Azure snuck in an update to that SDK before 2011’s end.

With this update to the Windows Azure SDK for Node.js, the community gets a PowerShell update with new cmdlets for “easily enabling remote desktop,” as well as SSL support, according to Microsoft’s official blog entry.

Also new with this release, as per that same blog entry:

  • IISNode 0.1.13 update (including 2X performance improvements!)
  • Node.js 0.6.6 update from Joyent, including a more stable npm
  • Installer in-place upgrade support for all these components (so you can just click the WebPI link and it will upgrade your existing installation of these components if you already have the previous version, or install them all if you don’t)
  • Numerous bug fixes in all three components (see here, here and here)

Oh, and I also recommend clicking through to the blog if you’re interested in a screenshot-based walk-through on how to enable remote desktop.

There’s been some channel chatter that Node.js may well be the next Ruby on Rails. In fact, Microsoft Windows Azure is an official sponsor of the upcoming inaugural Node Summit. I fully expect to hear more chatter around Node.js and how Azure partners fit in come 2012, so keep watching TalkinCloud.


David Makogon (@dmakogon, pictured below) continued his series with Windows Azure ISV Blog Series: Digital Folio on 12/29/2011:

The purpose of the Windows Azure ISV blog series is to highlight some of the accomplishments from the ISVs we’ve worked with during their Windows Azure application development and deployment. Today’s post, written by Windows Azure Architect Evangelist Ricardo Villalobos, is about how Digital Folio is using Windows Azure to deliver their online shopping service.

Digital Folio is an Internet browser plug-in that allows end users to compare prices and find product suggestions while shopping online. The client portion of the solution is displayed as a sidebar or widget that is easily accessible while searching. Once the product has been found, the end-user can start comparing prices from different vendors or simply drag-and-drop the item into one of their “folios” to track latest prices, price history, and other information from retailers such as Amazon, BestBuy, Sears, Target, and Wal-Mart. Users can share their folios with friends, family, and sales staff, creating a rich social shopping experience.

Although Digital Folio’s practical and collaborative user interface is extremely impressive, it’s important to understand the role that Windows Azure architecture, infrastructure, and technology plays in supporting it.

Architecture

The Digital Folio browser plug-in was created using Silverlight, served from an IIS website running on a Windows Azure web role. Once installed on the client machine, it asynchronously communicates with a series of web services hosted on a second web role, using WCF as the Service layer. These services, in turn, talk to a Business / Data layer, which takes care of concurrency and transaction management. Up to this point, this is a typical line-of-business application architecture, taking advantage of the clustered nature of Windows Azure to easily scale out and adapt to different levels of traffic. However, what makes this architecture special is the use of Windows Azure Storage tables to save all the information generated by the multiple users comparing and shopping products online.

Digital Folio first considered using SQL Azure as their primary storage mechanism, but quickly realized that the decision was not that simple, based on typical business drivers of consumer Internet applications like out-of-the-box scalability and capacity planning. Eventually, Digital Folio went with Windows Azure tables, but some functions and features - like reporting - were not as easy to implement when using table storage. The following section summarizes the lessons and best practices that they learned when working with Windows Azure tables.

Deciding between Windows Azure Storage tables and SQL Azure

When Digital Folio was making many of these decisions over one year ago, SQL Azure was still growing up. For instance, one could only purchase 1 GB - 10 GB database sizes, but today instances go up to 150 GB... and there were no supported options for SQL Azure sharding until recently, when SQL Azure Federation support went into production.

Digital Folio started by identifying the different characteristics that were relevant to their cloud architecture, and came up with the following:

  • Cost
  • Scalability
  • Performance
  • Reporting / Custom Views
  • Capacity

Cost Analysis

In terms of hard costs, using non-relational Windows Azure tables represented a significant reduction in operation costs, given that each Gigabyte is priced at $0.14 USD per month, plus $0.01 per 10,000 transactions (compared to an average price of $9.99 per Gigabyte per month for SQL Azure). However, the learning curve and figuring out best practices for table storage were certainly costs for the Digital Folio team at the time. Today, this soft cost should be lower given the amount of guidance and tooling available to support development efforts on Windows Azure tables.
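
To make those numbers concrete with a hypothetical workload: 50 GB of entity data and 10 million storage transactions per month would cost roughly 50 × $0.14 + (10,000,000 ÷ 10,000) × $0.01 = $7.00 + $10.00 = $17.00 in Windows Azure table storage, versus on the order of $500 for a 50 GB SQL Azure database at the average per-gigabyte rate quoted above. (SQL Azure pricing is actually tiered, so treat these as ballpark figures rather than a quote.)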

Scalability Analysis

Given the partitioning of Windows Azure tables, the Digital Folio team was confident in the ability to easily scale tables to hundreds of millions of rows as long as a proper partitioning strategy was employed on each table. At the time, Digital Folio was concerned about SQL Azure’s size restrictions and lack of clear scalability targets. Today, with SQL Azure Federations, combined with increased database sizes, these issues are less of a concern, but structured SQL storage with ACID properties will tend to need more TLC to attain similar levels of scalability as out-of-the-box NoSQL approaches. With a careful partitioning strategy for each Windows Azure table, the 500 requests/sec/partition metrics that Microsoft has targeted would work just fine for the number of expected users.

Performance Analysis

The biggest performance differences between SQL Azure and Windows Azure tables depend on how many results are returned in a single query and how many indexes are required per entity. SQL Azure, generally, provides better performance for queries that return greater than 1,000 rows, since each Windows Azure table query is currently limited to returning only 1,000 results per query along with a continuation token that is used to get additional results. Keeping this in mind, a query that returns 2,500 results would require a single SQL Azure call, but three Windows Azure table storage requests. Since Digital Folio had a small number of entity types to persist to storage, with small numbers of rows returned per query, Windows Azure tables were a great fit.

The second major performance difference comes from tables that have more than one or two indexes. Since Windows Azure tables get scalability from partitioning every row by a single partition key per row, lookups outside the partition key are essentially full table scans (read “performance impact with large tables”). SQL Azure is obviously a more traditional database in that multiple indexes can be added to each table. This can be certainly overcome by creating tables that are essentially indexes into other tables, and in fact, Digital Folio has done this on a few occasions as the need arose. Most queries to Windows Azure table storage generally returned in 150ms given the careful partitioning strategy that was built out across the Azure tables by the Digital Folio team.
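
To illustrate both points, here is a minimal sketch (mine, not Digital Folio’s code) using the Windows Azure storage client library of that era, with a hypothetical ProductPrice entity. Keeping the product identifier in the PartitionKey makes the lookup a single-partition query, and CloudTableQuery<T>.Execute() follows continuation tokens transparently when more than 1,000 rows come back:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity: one row per observed retailer price for a product.
// PartitionKey = product id (all prices for one product share a partition);
// RowKey just needs to be unique within that partition.
public class ProductPrice : TableServiceEntity
{
    public string Retailer { get; set; }
    public double Price { get; set; }

    public ProductPrice() { }

    public ProductPrice(string productId, string retailer)
        : base(productId, retailer + "_" + DateTime.UtcNow.Ticks)
    {
        Retailer = retailer;
    }
}

public static class PriceQueries
{
    public static void DumpPrices(CloudStorageAccount account, string productId)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        TableServiceContext context = tableClient.GetDataServiceContext();

        // Filtering on PartitionKey keeps this a single-partition lookup,
        // not a full table scan.
        CloudTableQuery<ProductPrice> query =
            (from p in context.CreateQuery<ProductPrice>("ProductPrices")
             where p.PartitionKey == productId
             select p).AsTableServiceQuery();

        // Execute() lazily follows continuation tokens, so result sets larger
        // than 1,000 rows are fetched in successive round trips.
        foreach (ProductPrice price in query.Execute())
        {
            Console.WriteLine("{0}: {1:C}", price.Retailer, price.Price);
        }
    }
}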

Reporting/Custom Views

Windows Azure table storage is generally a poor choice as a repository for full reporting given that only 1,000 rows are returned per query and then each additional 1,000 rows requires an extra call with the continuation token provided by the previous one. To that end, the Digital Folio team placed all analytics events in a separate SQL Azure database system so that traditional reporting can occur.

Capacity

Windows Azure tables can scale to 100 TB for table, blob, and queue storage per storage account, which will be plenty for most applications. Currently, SQL Azure goes up to 150 GB per database, with larger databases possible with the use of Federations.

Conclusion

Digital Folio considered different factors before choosing Windows Azure Tables as the storage mechanism for their cloud solution. The same process can be followed by companies with similar requirements, as they consider factors such as cost, learning curve, scalability, performance, reporting, and capacity.

Stay tuned for the next post in the Windows Azure ISV Blog Series and feel free to tell us what you think about the series by posting a comment below. We look forward to hearing from you!


Bruce Kyle announced Server Side JavaScript Programming Comes to Windows Azure with Node.js in a 12/28/2011 post to the US ISV Evangelism blog:

A holiday update to the Windows Azure SDK for Node.js can help get you started with server-side JavaScript on Windows Azure.

Node.js offers a server side JavaScript programming model ideal for building highly scalable and performant network applications whether on premise or in the cloud. One of its flagship qualities is that it leads you down a path of writing code that is using non-blocking IO thus achieving greater scale. Another is the fact that it is super small and lightweight. It has a very rich ecosystem of modules like express and socket.io which developers can pull in using the awesome node package manager otherwise known as npm. Thanks to the excellent partnership between Joyent and Microsoft we were able to port Node and NPM on Windows to enable a new class of applications.

Highlights of this preview:

  • Windows Azure PowerShell for Node.js 0.5.1 update, including new cmdlets for easily enabling Remote Desktop, as well as SSL support.
  • iisnode 0.1.13 update, including significant performance improvements (>2x throughput of the previous version)
  • Node.js 0.6.6 update from Joyent, including a more stable npm
  • Installer in-place upgrade support for all these components, so you can just click the WebPI link (linked from the dev center) and it will upgrade your existing installation of these components if you already have the previous version, or install them all if you don’t.
  • Numerous bug fixes in all three components

Learn more about Node.js and how to get started:

Get the December update.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Tim Anderson (@timanderson) posted ITWriting.com awards 2011: ten key happenings, from Nokia’s burning platform to HP’s nightmare year on 12/29/2011 which included the following:

10. Product that deserves better award: Microsoft LightSwitch

On reflection maybe this award should go to Silverlight; but it is all part of the same story. Visual Studio LightSwitch, released in July 2011, is a model-driven development tool that generates Silverlight applications. It is nearly brilliant, and does a great job of making it relatively easy to construct business database applications, locally or on Windows Azure, complete with cross-platform Mac and Windows clients, and without having to write much code.

Several things are unfortunate though. First, usual version 1.0 problems like poor documentation and odd limitations. Second, it is Silverlight, when Microsoft has made it clear that its future focus is HTML 5. Third, it is Windows and (with limitations) Mac, at a time when something which addresses the growing interest in mobile devices would be a great deal more interesting. Typical Microsoft own-goal: Windows Phone 7 runs Silverlight, LightSwitch generates Silverlight, but no, your app will not run on Windows Phone 7. Last year I observed that Microsoft’s track-record on modelling in Visual Studio is to embrace in one release and extinguish in the next. History repeats?


Jan Van der Haegen (@janvanderhaegen) continued his MEF series with LightSwitch and the MEF story (part 2): using SubSystemLoaders to wire up the application on 12/28/2011:

Let’s be honest, no one likes to write documentation, including the LightSwitch team. This is one of the things about LightSwitch many find annoying, but I find so intriguing. Even after half a year of working with it, I still discover things in the LightSwitch framework that have 0 official documentation, 0 blog posts, 0 results on google, … Diving into the LightSwitch framework as deeply as I am, makes me feel like an explorer, Marco Polo, charting the uncharted…

In today’s post I’ll walk you through one of those uncharted territories and try to explain how LightSwitch uses SubSystemLoaders to set up the LightSwitch application, previously referred to as “How LightSwitch does composition“, …

In my first post about LightSwitch and the MEF story, I wrote that a LightSwitch application uses MEF “to glue together different subsystems at runtime”. By the way, I did not randomly pick the term SubSystem, it’s actually a LightSwitch idea to load together different subsystems…

So let’s find out how they work…

Client…
Step one: config.xml

When a LightSwitch application fires up, one of the first things it will do is locate a file called config.xml… This file is created/updated by the LightSwitch build process, and located inside your Silverlight XAP package. To view it, open Windows Explorer and navigate to your LightSwitch solution. Deeper down, in the subfolder bin/debug/web, you will find the built Silverlight application “MyLightSwitchApplication.Client.xap”. If you change the extension from XAP to ZIP, you can use your normal unzipping tool to open the compiled Silverlight application, and find the config.xml file inside.

Step two: Manifests

I won’t explain the ApplicationCulture, ManifestTimeStamp, ApplicationName, Version or RootProjectGuid nodes in this post, but jump right to the Manifests.

Each of these manifests refers to a xml file, which in turn contains one (or more, but usually just one) name of an assembly. For each of these assemblies, as identified by these manifests, the LightSwitch framework creates an AssemblyCatalog.

Step three: trying to locate SubSystemLoaders

In each of these catalogs, the LightSwitch framework will search for all exports of implementations of ISubSystemLoaders.

It is quite picky about what ISubSystemLoaders it will accept, and on how to organize them, as revealed by the export attribute and metadata used on one of those ISubSystemLoaders…

[Export(typeof(ISubsystemLoader)), Package("SubsystemLoader"), SubsystemName("Runtime.UserCode"), DependsOnSubsystem("Runtime.Presentation")]

Step four: deciding what SubSystemLoaders to accept.

Now that it found a list of ISubSystemLoader implementations, the LightSwitch framework reads the next node of our Config.xml file, which contains the names of the ISubSystemLoaders it needs to load. For each of them, it will call the Load method…

Some of them explain pretty well what they do just by looking at the name (ModelLoader, PresentationRuntimeLoader, ThemingRuntimeLoader, RuntimeShellLoader, UtilitiesLoader, DiagnosticsLoader), others are just pure mystery (yes, that XML file does state there should be some “ReportingLoader”…).

Two of special interest are the RuntimeUserCodeLoader and the RuntimeExtensionLoader. They each load up the assemblies as identified by the ExtensionAssemblies and UserCodeAssemblies nodes…

Which obviously identify your code: your extensions, and your LightSwitch application…

Step five: Initializing the application.

After the LightSwitch framework has correctly composed all subsystems, it is ready for use and will create a new instance of your LightSwitch application. The first method, called directly from the constructor of the application, is the “partial void Application_Initialize()” method.

Your LightSwitch application is loaded and initialized; time to navigate to the main page of the defined shell (or to the LoginPage.xaml, if forms authentication is enabled) and enjoy the fruits of this complex process.

Who said true beauty lies in simplicity?

Server…
Step one: I have no idea.

Unfortunately, I haven’t even scratched the surface of the LightSwitch framework on the client-side, and thus haven’t even started to dig into the server-side.

However, opening the Web.config, those first appSettings smell strangely familiar all of a sudden…

Conclusion…

The LightSwitch framework seems to use configuration (config.xml client side, and web.config server side), to locate ISubSystemLoader implementations, identify them, and ask them to load a part of that LightSwitch power for your application.

This configuration is generated each time you build your LightSwitch application, and I haven’t gone so far as to manipulate this during the build process, but I think the result of such is hardly difficult to guess.

Until then, the Application_Initialize method seems to be the earliest point in a LightSwitch application where one could execute any user code, such as registering additional ExportProviders with the LightSwitch MEF container, as sketched below.
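
For reference, here is what that earliest hook looks like in a C# LightSwitch client project (a bare sketch: the body is a hypothetical placeholder, and the partial class is the one LightSwitch already generates for your application):

namespace LightSwitchApplication
{
    public partial class Application
    {
        // Called from the Application constructor, before any shell or screen
        // is shown; the earliest point where user code runs on the client.
        partial void Application_Initialize()
        {
            // Hypothetical example: kick off your own bootstrapping here, such as
            // registering additional ExportProviders with the LightSwitch MEF
            // container or reading user settings.
        }
    }
}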


Jan Van der Haegen posted Quick tip: five things you should know about the IUserSettingsService in LightSwitch on 12/28/2011:

I opened my blog today to write two lengthy posts, when I noticed in the site statistics that someone is repeatedly hitting my blog because he/she is having some problems with the LightSwitch IUserSettingsService implementation… Whoever you are, this quick one is for you…

What is the UserSettingsService?

The UserSettingsService is an internal implementation of the Microsoft.LightSwitch.Runtime.Shell.View.IUserSettingsService interface. The name of the interface, and the namespace it’s in, already reveal its purpose: persisting and retrieving any kind of specific user settings, to use in any of your client side code: your LightSwitch application, a custom shell, a custom theme, user control, … You can see it in action in the Shell extension walkthrough on MSDN.

How can I access the UserSettingsService?

To get a reference to the implementation:

  1. make sure you added a reference to the Microsoft.LightSwitch.SdkProxy.dll assembly. You can find the dll where your visual studio is installed. So in my case: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\\LightSwitch\1.0\Client\Microsoft.LightSwitch.SdkProxy.dll
  2. Ask the VsExportProviderService, a service that allows you to query the MEF container of LightSwitch, for an IServiceProxy implementation, as such: VsExportProviderService.GetExportedValue<IServiceProxy>()
  3. The returned IServiceProxy has a property called UserSettingsService. Retrieve the value.
When should I save my settings?

The IUserSettingsService has three methods, and one event. The first two of the three methods are self-explanatory:

  • GetSetting retrieves the object/value/setting that was previously stored for a given name, or returns null if no value exists in its cache for that name.
  • ResetSetting clears the object/value/setting that was previously stored for a given name.

The third method, however, is a bit tricky. The SetSetting method sets the value/object/setting for a given name (or adds a new entry) to its cache, but it doesn’t persist that value yet. The values are only persisted right after the IUserSettingsService fires its Closing event. This implies two things:

  • Since your values are only persisted right after this event, it’s considered best practice to subscribe to this event (i.e., add an event handler), then call the SetSetting method for each of the key/value pairs you wish to save, as shown in the sketch after this list.
  • The Closing event originates from the application Closing event (this means: closing the browser/current page tab for in-browser applications, or hitting the red X at the top right of your out-of-browser application). If you kill the process of your LightSwitch application, the event will not fire and thus none of your settings will be persisted. This implies that when you use the Stop Debugging (Shift-F5) command from Visual Studio, none of your settings will be persisted.
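
Putting the pieces from this post together, a minimal usage sketch could look like the code below. Fair warning on the hedges: the namespaces in the using directives and the exact GetSetting/SetSetting/Closing signatures are my reconstruction from the description above and may differ slightly in your SDK version, and WindowWidthSettings is just a hypothetical example class:

using System;
using Microsoft.LightSwitch.Runtime.Shell.View;      // IUserSettingsService
using Microsoft.VisualStudio.ExtensibilityHosting;   // VsExportProviderService (verify in your SDK)
using Microsoft.LightSwitch.Client;                  // IServiceProxy (verify in your SDK)

// Hypothetical helper that remembers a window width across sessions.
public class WindowWidthSettings
{
    private readonly IUserSettingsService settings;

    public double LastWindowWidth { get; set; }

    public WindowWidthSettings()
    {
        // Resolve the proxy from LightSwitch's MEF container.
        settings = VsExportProviderService
            .GetExportedValue<IServiceProxy>()
            .UserSettingsService;

        // Read the previously persisted value, or fall back to a default.
        object stored = settings.GetSetting("LastWindowWidth");
        LastWindowWidth = stored != null ? (double)stored : 800;

        // Values set here are only written to disk after the Closing event fires,
        // so push them into the cache from the event handler.
        settings.Closing += (sender, e) =>
            settings.SetSetting("LastWindowWidth", LastWindowWidth);
    }
}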
Why won’t it save my settings?

Each time you start a LightSwitch in-browser application (ie: project>properties>application type>web), the application will have a different ID. Because this ID is used to locate the folder on your hard disk where the settings are persisted, the LightSwitch application will retrieve a different folder each time. This means that using the IUserSettingsService does not work for in-browser LightSwitch applications that are started from Visual Studio. Suggested solution: switch to a LightSwitch desktop application for testing; when you deploy as a web app, it will work just fine.

Where are my settings persisted?

According to Bill R, they are stored in your MyDocuments folder for a LightSwitch desktop app, and in Silverlight isolated storage for a LightSwitch web app.

As I described earlier, this implies that the “User” in “IUserSettingsService” is referring to the Windows User, not, as one might suspect, the user currently logged in to your LightSwitch application.

This also implies that, because the IUserSettingsService uses Silverlight isolated storage for LightSwitch web apps, there is a maximum quota of 1 MB per LightSwitch app to store your settings.

And a special bonus tip…

Only 45 minutes after posting, a bonus tip came in from John Kears, a fellow LightSwitch hacker. :-)

The moral of my story is that any type you plan to save out as a user setting must be serializable; if it isn’t, make it so.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Ed Scannell and Stuart Johnston asserted IT pros will take a harder look at cloud computing in 2012 in a 12/29/2011 article for SearchCloudComputing.com:

imageDespite slow adoption by enterprise IT in the years since cloud computing emerged, 2012 may turn out to be the year when cloud technologies finally begin to gain parity with more traditional data center staples such as virtualization and tape libraries.

TechTarget's 2012 IT Priorities Survey found that a growing number of enterprises -- some 24.1% -- plan to grow their expenditures for cloud services over the next year. In fact, 27% of respondents said that cloud computing initiatives were viewed with high importance at their companies. Another 53% rated the importance of their cloud projects as medium.

“We’ve done enough investing in infrastructure-level products and virtualization, along with exploring options for cloud strategies,” said Len Barney, a purchasing agent with a large transportation company in Jacksonville, Fla. “Next year is when we’ll move forward with implementing our first significant cloud, which will be a hybrid [cloud model]. It’s been a long time coming, but we’re there now.”

It’s just a matter of time before we roll out [private cloud], but whether we do it in 2012 depends on which way the economic winds blow.

Ned Johnson, IT manager with a Houston-based trucking company

While only 28.1% of the respondents said implementing a private cloud next year is a high priority, about 56.1% said it was a medium-level priority. Only 9% of respondents said implementing a private cloud was not on their radar screens in 2012. Some of those making private cloud implementations a medium priority next year said doing so hinges largely on the state of the economy. …

Read the entire article here.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com


Jim O’Neil described Windows Azure Trial Account Spending Limit in a 12/28/2011 post:

A couple of weeks ago, Microsoft announced a number of new features for Windows Azure including a revamped subscription process making it easier than ever to try out the platform. In pointing out the “risk-free” nature of the offers in a previous post, I was referring specifically to the spending limit feature introduced for newly provisioned trial and MSDN offers. With this post I’ll dig a bit further into the specifics of the spending limit and how your Windows Azure subscriptions (new and existing) are affected.

  1. What is the Spending Limit?
  2. How do I know that I've reached the spending limit?
  3. What happens when the spending limit is reached?
  4. How do I remove the spending limit?
  5. How do I cancel my account?
What Is the Spending Limit?

All 3-month trial subscriptions and MSDN subscriber benefits for Windows Azure that were provisioned after the launch of the new process (December 10th, 2011) have a default “spending limit” of $0. As a result, if you exceed any of the monthly allotments of complimentary services associated with your subscription (see the table below), you will not incur charges, but instead your subscription will be suspended until the next monthly billing cycle, at which point the usage meters are reset, and the account becomes available again.

There are a few nuances regarding the spending limit that may not be readily apparent:

  • You cannot set a specific spending limit; the limit is $0 which is set by default on all new 3-month trial and MSDN subscriber accounts. The $0 means that your account will be suspended before any amount is charged for exceeding the monthly complimentary allotments of services shown in the table below.
  • You can turn off the spending limit by opting into a "pay-as-you-go subscription" (cf., How do I remove the spending limit?), but you cannot undo this action.
  • If you do turn off the spending limit for your 3-month trial account, when the trial period ends you will be charged for any continued usage at the “pay-as-you-go” rates. If you leave the spending limit at the default setting of $0, your account will simply expire with no further action required on your part.
  • Trial and MSDN offers provisioned before the changes on December 10th are NOT covered by the spending limit, and charges will occur if you exceed the complimentary monthly services allotment.
Complimentary Monthly Windows Azure Service Allotments

image 

How Do I Know That I’ve Reached the Spending Limit?

When you log in to the Account Center for your Windows Azure account, you may see various notifications associated with your subscriptions. For instance, below is how my 3-month trial offer account appears a few days after provisioning it. Note there is one notification: Your Free Trial expires in 85 day(s). Would you like to upgrade now?

Subscriptions list in Account Center

When you get close to reaching your monthly allotment, you’ll see another notification as highlighted below. (This view shows the details view of the 3-Month Free Trial offer selected from the list of subscriptions shown in the previous screen shot.)

Account Center showing approach to spending limit

When you’ve reached the spending limit, you see an updated notification indicating the subscription has been disabled to prevent charges, and the resource which has hit or exceeded the complimentary monthly allotment is highlighted. Note that below I’ve exceeded the compute allotment of 750 hours by over 200 hours! Clearly the allotments are not precisely enforced (attributable to the near-but-not-quite-real-time nature of the account billing); however, also notice that the $24.48 that would normally be charged for the 200+ hour overage of compute time is waived and not part of the estimated bill.

Account Center showing spending limit reached

When you’ve reached the spending limit, you’ll also receive an e-mail from MSFT*Azure <billing@microsoft.com> informing you that the account has been disabled to prevent charges to your credit card and also giving instructions for disabling the spending limit so you can continue to use the account services.

E-mails are no longer sent when compute utilization reaches 75%, 100%, and 125% of the complimentary monthly allotment. If you are using an account provisioned before December 10th, 2011, you are responsible for charges exceeding the free allotment, so do check the Account Center periodically to avoid unexpected expenses.

What Happens When the Spending Limit Is Reached?

At the point the spending limit has been reached, your subscription will be disabled (see below) until the next monthly billing cycle (or if this is the third month of the 3-month trial offer, your account will expire automatically). When an account is suspended, all compute services are removed. Storage accounts remain intact, but attempts to access them result in a 403 Forbidden exception (see callout below).

Management Portal showing suspended account

Management Portal showing services in disabled account

According to the Windows Azure site:

When your usage exhausts the monthly amounts included in your offer, we will disable your service for the remainder of that billing month, which includes removing any hosted services that you may have deployed. The data in your storage accounts and databases will be accessible in a read-only manner. At the beginning of the next billing month, your subscription will be re-enabled and you can re-deploy your hosted service(s) and have full access to your storage accounts and databases.

In my experience, the entire subscription is disabled, and attempts to access storage, even in a read-only manner, are met with a 403 Forbidden error with a return code of AccountIsDisabled. When using Server Explorer in Visual Studio to access your suspended storage account, you’ll be erroneously notified that the storage key is invalid. Using another tool, such as Cerebrata’s Cloud Storage Studio, provides more visibility into the actual HTTP response message:

Request URI: https://trialstoragejim.blob.core.windows.net/?restype=container...
Response Headers:
   x-ms-request-id: d5968738-013a-4e45-8b10-9f6fe3c2d380
   Content-Length: 220
   Content-Type: application/xml
   Date: Mon, 26 Dec 2011 15:30:41 GMT
   Server: Microsoft-HTTPAPI/2.0
Error Details:
   Code: AccountIsDisabled
   Message: The specified account is disabled.
   RequestId:d5968738-013a-4e45-8b10-9f6fe3c2d380
   Time:2011-12-26T15:30:42.4474124Z
   Error Code: HTTP Status Code: (403)
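
If you want your own code to distinguish a suspended account from a genuinely bad key, you can catch the storage exception and inspect the extended error information. Here is a minimal sketch against the Windows Azure StorageClient library of that era; the connection string is a placeholder, and the exact exception members may differ between library versions:

using System;
using System.Net;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class CheckStorageAccess
{
    static void Main()
    {
        // Placeholder connection string -- substitute your real account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=trialstoragejim;AccountKey=...");
        var blobClient = account.CreateCloudBlobClient();

        try
        {
            foreach (var container in blobClient.ListContainers())
                Console.WriteLine(container.Name);
        }
        catch (StorageClientException ex)
        {
            if (ex.StatusCode == HttpStatusCode.Forbidden && ex.ExtendedErrorInformation != null)
            {
                // A suspended subscription returns AccountIsDisabled rather than
                // an authentication failure.
                Console.WriteLine("403 Forbidden: {0}", ex.ExtendedErrorInformation.ErrorCode);
            }
            else
            {
                throw;
            }
        }
    }
}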
  

How Do I Remove the Spending Limit?

You can remove the spending limit at any time by selecting one of the notifications in the Account Center. Removing the spending limit is tantamount to upgrading to a pay-as-you-go plan and is a permanent change (until you cancel your subscription).

Removing the spending limit

After upgrading the subscription, the alert message in the Account Center notes that the spending limit has been removed, and you’ll be charged for usage beyond the complimentary allotment provided in the subscription.

Account Center after spending limit removed

Note the charges of $33.66 to my account above! I removed the spending limit after my account had been suspended due to exceeding my complimentary service allotment. The bill reflects the overages of my account before the suspension kicked in (I had expended over 1,000 hours of compute time before the 750-hour limit was triggered). I believe this amount should NOT be charged, but I am seeking clarification on the process and will update this post accordingly.

How Do I Cancel My Account?

Ok, so all good things come to an end, or perhaps you’ve convinced your company to go all in and you don’t need your individual account anymore? It’s pretty straightforward to cancel: just visit the Account Center, select the subscription you wish to cancel, and click the Cancel Subscription option on the right sidebar.

Option to cancel subscription

That leads to the confirmation screen, which duly expresses our sentiment at your decision!

Subscription cancellation screen

After confirming the cancellation, you’ll see the subscription now listed in the Cancelled section of your Account Center subscriptions page.

A cancelled Windows Azure subscription

Likewise, the Windows Azure Management Portal shows the account now disabled:

Cancelled subscription in the Management Portal


Wely Lau (@wely_live) reported “A Cloudy Place”– Blogging About Cloud Computing on 12/28/2011:

imageI am glad to share that the “a cloudy place” blog is finally up here: http://acloudyplace.com.

What is “a cloudy place”?

imageA centralized blog focused on cloud technology, exclusively for developers. If you’ve heard of SQL Server Central, it’s somewhat similar but focuses on cloud computing. You can find topics such as general cloud information, Amazon Web Services, Windows Azure, and many more.

Who owns “a cloudy place”?

“a cloudy place” is owned and managed by Red Gate, a software company based in Cambridge, UK, that specializes in SQL, DBA, .NET, and Oracle development tools.

What does this have to do with me?

imageAha! I have been invited to become a contributor to “a cloudy place”. In my first few posts, you will see me write about some general cloud concepts. Of course, I’ll discuss Windows Azure in more detail later on.

You can find my articles here.

Full disclosure: I’m also a contributor to A Cloudy Place. My first article (about OData) is here.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Kenon Owens (@MS_Int_Virt) posted Aggregating and Abstracting Resources to offer Private Clouds to the Microsoft Server and Cloud Platform Team blog on 11/28/2011:

imageOne of the biggest advantages of the Private Cloud is the ability to offer self-service to your Line of Business owners. That way they can deploy and manage their services more quickly and efficiently. To gain this ability, you need a way to take all of your diverse hardware fabrics and logically abstract them into a private cloud infrastructure. Also, once you have created these “clouds”, you need to delegate these resources to the correct individuals who need access. The Microsoft Private Cloud allows you to do this through the creation of clouds and delegating these resources to Lines of Business that need to use them in a controlled manner.

imageToday, when someone wants to deploy a service, you may have a request system where they fill out a form or send an email to ask for the service. After you get this request, you usually look at the infrastructure you have and try to find a suitable host or cluster of hypervisors to put the VMs on. Then you would install a number of VMs and their base OS. After that, you would usually log onto the machines and spend hours to days configuring the systems and then installing the application. What if there was a simpler way to give the requestor access to their resources?

Think of it this way:

1) You have compute, storage, and network fabrics made up of different components. You may have servers with different architectures (Intel or AMD), different types of storage (Fibre Channel or iSCSI) from different vendors, and complex networking. You also may have different hypervisors (VMware vSphere, Citrix XenServer, and Microsoft Hyper-V) that you have to deal with.

clip_image002

2) You want to be able to abstract that heterogeneous architecture into something logical and standardized by grouping and sharing that diverse infrastructure.

clip_image004

clip_image006

3) Then, you will abstract these resources to create clouds

clip_image008

a. Notice that resources for the cloud can come from different datacenters. This flexibility gives you options in how you configure your cloud resources.

b. Also, note that you can have disparate hypervisor, compute, storage, and networking fabrics in the same cloud.

4) Once you have created these clouds, you can delegate these resources to different Lines of Business.

clip_image010

a. You can delegate both how much of the cloud’s resources these groups can use and the types of actions they can perform within their portion of the cloud.

clip_image012

clip_image014

5) At this point your self-service users can deploy standardized services from templates you have created and offered to them.

clip_image016

Creating clouds this way and delegating access allows for better utilization and more efficient use of resources than before. The reason is that, through delegation, self-service users can deploy services to the hardware while placement is chosen intelligently behind the scenes, removing a lot of the complexity for those self-service users.

If you want to get started creating clouds and abstracting your underlying fabric into the logical resources needed for clouds, please download Virtual Machine Manager 2012 RC or all of the Microsoft System Center 2012 Pre-Release Products. Or, to see how this fits in with what you may be doing, join our Private Cloud Community Evaluation Program.

Kenon is Technical Product Manager, Management and Security for Microsoft


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

O’Reilly Media reported Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.


Save 20% on registration with the code RADAR20


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) reported New Features for Amazon SNS - Delivery Policies and Message Formatting in a 12/28/2011 post:

imageWe have added two new features to Amazon SNS to give you additional control over the content and delivery of your messages. As a brief reminder, SNS allows you to create named Topics, subscribe to Topics (with delivery via email, HTTP / HTTPS, an SMS message, or to an SQS queue), and to publish messages to Topics.

SNS Delivery Policies
imageThe SNS Delivery Policies give you options to control the delivery rate and error handling for each SNS endpoint. You can, for example, use a Delivery Policy to avoid overwhelming a particular endpoint with a sudden barrage of messages.

Delivery Policies can be set for Topics and for the endpoints associated with a particular Topic. Each Delivery Policy contains a Retry Policy and a Throttle Policy. With this release, the policies are effective for the HTTP and HTTPS Subscription types.

The Retry Policy can specify the following options:

  • minDelayTarget - Minimum delay for a retry.
  • maxDelayTarget - Maximum delay for a retry.
  • numNoDelayRetries - Number of retries to be done with no delay (as soon as possible).
  • numMinDelayRetries - Number of retries to be done at minDelayTarget intervals before initiating the backoff function.
  • numMaxDelayRetries - Number of retries to be done at maxDelayTarget intervals during the backoff function.
  • backoffFunction - Model for backoff between retries: Linear, Exponential, or Arithmetic.

There are default, minimum, and maximum values for each option; see the SNS documentation for more information.

The Throttle Policy can specify one option:

  • maxReceivesPerSecond - Maximum number of delivery attempts per second per Subscription.

All attempts to deliver a message are based on an "effective Delivery Policy" which combines the default policy, any policy values set for the Topic, and any policy values set for the Subscription endpoint. Values left unspecified at the Subscription level will be inherited from the Topic's Delivery Policy and then from the default policy.
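
As an illustration of setting these values on a topic, here is a hedged sketch using the AWS SDK for .NET. The SetTopicAttributes call and the "DeliveryPolicy" attribute name follow the SNS API, but the exact JSON envelope (the "http" key and the member names wrapping the options listed above) is my assumption, and the topic ARN is hypothetical:

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

class SetDeliveryPolicy
{
    static void Main()
    {
        var sns = new AmazonSimpleNotificationServiceClient();  // credentials from app config

        // Assumed JSON shape -- check the SNS docs for the exact envelope.
        string deliveryPolicy = @"{
          ""http"": {
            ""defaultHealthyRetryPolicy"": {
              ""minDelayTarget"": 20,
              ""maxDelayTarget"": 300,
              ""numNoDelayRetries"": 0,
              ""numMinDelayRetries"": 2,
              ""numMaxDelayRetries"": 3,
              ""backoffFunction"": ""exponential""
            },
            ""defaultThrottlePolicy"": { ""maxReceivesPerSecond"": 10 }
          }
        }";

        sns.SetTopicAttributes(new SetTopicAttributesRequest
        {
            TopicArn = "arn:aws:sns:us-east-1:123456789012:my-topic",  // hypothetical ARN
            AttributeName = "DeliveryPolicy",
            AttributeValue = deliveryPolicy
        });
    }
}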

SNS Message Formatting
This feature gives you the ability to publish a message that contains content that is specific to each type of Subscription. You could, for example, send a short message to an SMS endpoint and a longer message to an email endpoint.

To use this feature, you set the new MessageStructure parameter to "json" when you call the SNS publish function. The associated message body must contain a JSON object with a default message body and optional message bodies for other protocols:

{
"default" : "Server busy.",
"email" : "Dear Jeff, your server is really busy and you should investigate. Best Regards, AWS.",
"sms" : "ALERT! Server Busy!!",
"https" : "ServerBusy"
}

The default entry will be used for any protocol (http and sqs in this case) that does not have an entry of its own.
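
For example, publishing such a multi-format message from the AWS SDK for .NET might look roughly like this (the topic ARN is hypothetical, and property names may vary slightly between SDK versions):

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

class PublishMultiFormat
{
    static void Main()
    {
        var sns = new AmazonSimpleNotificationServiceClient();  // credentials from app config

        // Per-protocol bodies; "default" covers every protocol without its own entry.
        string message = @"{
          ""default"" : ""Server busy."",
          ""email""   : ""Dear Jeff, your server is really busy and you should investigate. Best Regards, AWS."",
          ""sms""     : ""ALERT! Server Busy!!"",
          ""https""   : ""ServerBusy""
        }";

        sns.Publish(new PublishRequest
        {
            TopicArn = "arn:aws:sns:us-east-1:123456789012:my-topic",  // hypothetical ARN
            Subject = "Server status",
            Message = message,
            MessageStructure = "json"   // tells SNS to treat Message as per-protocol JSON
        });
    }
}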


David Linthicum (@DavidLinthicum) asserted “The consumer cloud, OpenStack, and more cloud outages all figure into our future” in a deck for his Cloud computing roars into 2012 post of 12/28/2011 to InfoWorld’s Cloud Computing blog:

imageI've already voiced my take on the key trends shaping the future of cloud computing. But what should you expect in the near term? Here's what I think will happen in cloud computing next year.

Rapid rise of data living in the cloud. I've called 2013 the year of data in the cloud, and I stand behind that. What about 2012? Consider the data problems faced by most enterprises, as well as the availability of newer technology such as Hadoop and cloud-based relational database systems. We will see this space explode next year, and the momentum will hit in 2013. …

imageThe rise of the "consumer cloud." Also called "retail clouds," offerings such as iCloud, Box.net, Dropbox, Office 365, and Google Apps will dominate much of the spending in 2012 as the at-home market finds that cloud computing is both convenient and cheap.

OpenStack becomes more confusing. It's an open source platform, it's a product, it's a service engagement. What is it? I have high hopes for the OpenStack technology next year. Numerous new products will be based upon OpenStack, and companies will embrace OpenStack as their cloud platform of choice. However, OpenStack will leave many people confused. By 2013, the technology will be better understood.

A few more big outages, but nobody will care. I predicted a few cloud outages this year, and they did indeed occur, with Amazon Web Services taking several well-publicized naps. However, cloud computing grew and continues to grow, with revenue hitting $1 billion. I suspect we'll have more big outages this coming year. The press will go nuts, but data and processes will remain in the cloud and the growth will proceed.


Matthew Weinberger (@M_Wein) posted Rackspace Cloud Adds OpenLogic CloudSwing PaaS to Lineup to the TalkinCloud blog on 12/28/2011:

imageOpenLogic CloudSwing, one of many recent entrants into the open source platform-as-a-service (PaaS) market, has announced its entry into the Rackspace Cloud Tools program. Rackspace Hosting’s customers and partners can now deploy CloudSwing from the Cloud Tools app showcase.

imageCloudSwing’s value proposition is one we’ve heard a lot recently: “A fully flexible Platform-as-a-Service (PaaS) cloud solution offering cost-tracking and complete customization of technology stacks,” according to OpenLogic’s press release, with the added benefit of compatibility with more than 600 open source packages.

imageAnd now, with this announcement, Rackspace Cloud gets pre-configured stack templates to quickly deploy a CloudSwing PaaS (though you can customize as needed, of course). Moreover, CloudSwing users can monitor their Rackspace Cloud applications from the CloudSwing Dashboard. Finally, CloudSwing enables users to track the usage costs for any Rackspace Cloud application deployed from CloudSwing, across all accounts.

“CloudSwing has more than tripled the number of users in the first 60 days. As enterprises look to build customized PaaS environments in the Rackspace Cloud, we anticipate this growth to continue,” said Kim Weins, senior vice president of marketing of OpenLogic in a prepared statement.

Those keeping tabs on TalkinCloud the last few months can probably guess what I’m going to say about this announcement. Good for Rackspace and OpenLogic alike, but just how sustainable is this market segment in the face of competition from fellow startups and major vendors including Microsoft and VMware?


Jeff Barr (@jeffbarr) described Amazon S3 - Object Expiration on 12/27/2011:

Amazon S3 is a great way to store files for the short or for the long term.

imageIf you use S3 to store log files or other files that have a limited lifetime, you probably had to build some sort of in-house mechanism to track object ages and to initiate a bulk deletion process from time to time. Although our new Multi-Object Delete function will help you to make this process faster and easier, we want to go even farther.

imageS3's new Object Expiration function allows you to define rules to schedule the removal of your objects after a pre-defined time period. The rules are specified in the Lifecycle Configuration policy that you apply to a bucket. You can update this policy through the S3 API or from the AWS Management Console.

Each rule has the following attributes:

  • Prefix - Initial part of the key name (e.g. “logs/”), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
  • Status - Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
  • Expiration - Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object's creation date.
  • Id - Optional, gives a name to the rule.

You can define up to 100 expiration rules for each of your Amazon S3 buckets; however, the rules must specify distinct prefixes to avoid ambiguity. After an Object Expiration rule is added, the rule is applied to objects that already exist in the bucket as well as any new objects added to the bucket after the rule is created. We calculate the expiration date for an object by adding that object's creation time to the expiration period and rounding off the resulting time to midnight of that day. If you make a GET or a HEAD request on an object that has been scheduled for expiration, the response will include an x-amz-expiration header that includes this expiration date and the corresponding rule Id.
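
As an illustration, a rule that expires everything under the “logs/” prefix after 30 days might be set from the AWS SDK for .NET roughly as below. The type and property names (PutLifecycleConfigurationRequest, LifecycleRule, and so on) are my assumption of the SDK’s lifecycle surface and may differ by SDK version; the bucket name is a placeholder:

using System.Collections.Generic;
using Amazon.S3;
using Amazon.S3.Model;

class AddExpirationRule
{
    static void Main()
    {
        var s3 = new AmazonS3Client();   // credentials from app config

        s3.PutLifecycleConfiguration(new PutLifecycleConfigurationRequest
        {
            BucketName = "my-log-bucket",          // placeholder bucket name
            Configuration = new LifecycleConfiguration
            {
                Rules = new List<LifecycleRule>
                {
                    new LifecycleRule
                    {
                        Id = "expire-logs",        // optional rule name
                        Prefix = "logs/",          // applies to keys starting with logs/
                        Status = LifecycleRuleStatus.Enabled,
                        Expiration = new LifecycleRuleExpiration { Days = 30 }
                    }
                }
            }
        });
    }
}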

We evaluate the expiration rules once each day. During this time, based on their expiration dates, any object found to be expired will be queued for removal. You will not be billed for any associated storage for those objects on or after their expiration date. If server access logging has been enabled for that S3 bucket, an S3.EXPIRE.OBJECT record will be generated when an object is removed.

You can use the Object Expiration feature on buckets that are stored using Standard or Reduced Redundancy Storage. You cannot, however, use it in conjunction with S3 Versioning (this is, as they say, for your own protection). You will have to delete all expiration rules for the bucket before enabling versioning on that bucket.

Using Object Expiration rules to schedule periodic removal of objects can help you avoid having to implement processes to perform repetitive delete operations. We recommend that you use Object Expiration for performing recurring deletions that can be scheduled, and use Multi-Object Delete for efficient one-time deletions.

You can use this feature to expire objects that you create, or objects that AWS has created on your behalf, including S3 logs, CloudFront logs, and data created by AWS Import/Export.

For more information on the use of Object Expiration, please see the Object Expiration topic in the Amazon S3 Developer Guide.

This would obviously be a good feature for the Azure team to add to Azure blobs and, possibly, tables.


<Return to section navigation list>
