Tuesday, March 26, 2013

Windows Azure and Cloud Computing Posts for 3/25/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

• Updated 3/26/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, HDInsight and Media Services

• Sebastian Burckhardt (pictured below), Alexey Gotsman and Hongseok Yang authored an Understanding Eventual Consistency technical report, which Microsoft Research published on 3/25/2013. From the Abstract:

Modern geo-replicated databases underlying large-scale Internet services guarantee immediate availability and tolerate network partitions at the expense of providing only weak forms of consistency, commonly dubbed eventual consistency. At the moment there is a lot of confusion about the semantics of eventual consistency, as different systems implement it with different sets of features and in subtly different forms, stated either informally or using disparate and low-level formalisms.

We address this problem by proposing a framework for formal and declarative specification of the semantics of eventually consistent systems using axioms. Our framework is fully customisable: by varying the set of axioms, we can rigorously define the semantics of systems that combine any subset of typical guarantees or features, including conflict resolution policies, session guarantees, causality guarantees, multiple consistency levels and transactions. We prove that our specifications are validated by an example abstract implementation, based on algorithms used in real-world systems. These results demonstrate that our framework provides system architects with a tool for exploring the design space, and lays the foundation for formal reasoning about eventually consistent systems.

The topic of eventual consistency is of interest to users of geo-replicated cloud data storage with geo-failover for enhanced availability and durability, such as that offered since September 2011 at no cost for Windows Azure blobs and tables. It also applies to most NoSQL databases, such as Amazon’s DynamoDB, for transacted operations. From the technical report’s Introduction:

Modern large-scale Internet services rely on distributed database systems that maintain multiple replicas of data. Often such systems are geo-replicated, meaning that the replicas are located in geographically distinct locations. Geo-replication requires the systems to tolerate network partitions, yet end-user applications also require them to provide immediate availability. Ideally, we would like to achieve these two requirements while also providing strong consistency, which roughly guarantees that the outcome of a set of concurrent requests to the database is the same as what one can obtain by executing these requests atomically in some sequence. Unfortunately, the famous CAP theorem [18] shows that this is impossible. For this reason, modern geo-replicated systems provide weaker forms of consistency, commonly dubbed eventual consistency [32].

Here the word ‘eventual’ refers to the guarantee that if update requests stop arriving to the database, then it will eventually reach a consistent state. (1)

Geo-replication is a hot research area, and new architectures for eventually consistent systems appear every year [5, 14, 15, 17, 21, 24, 29, 30]. Unfortunately, whereas consistency models of classical relational databases have been well-studied [9, 26], those of geo-replicated systems are poorly understood. The very term eventual consistency is a catch-all buzzword, and different systems claiming to be eventually consistent actually provide subtly different guarantees and features. Commonly used ways of their specification are inadequate for several reasons:

  • Disparate and low-level formalisms. Specifications of consistency models proposed for various systems are stated informally or using disparate formalisms, often tied to system implementations. This makes it hard to compare guarantees provided by different systems or apply ideas from one of them in another.
  • Weak guarantees. More declarative attempts to formalise eventual consistency [29] have identified it with property (1), which actually corresponds to a form of quiescent consistency from distributed computing [19]. However, such reading of eventual consistency does not allow making conclusions about the behaviour of the database in realistic scenarios, when updates never stop arriving. [Emphasis added.]
  • Conflict resolution policies. To satisfy the requirement of availability, geo-replicated systems have to allow making updates to the same object on different, potentially disconnected replicas. The systems then have to resolve conflicts, arising when replicas exchange the updates, according to certain policies, often encapsulated in replicated data types [27, 29]. The use of such policies complicates the semantics provided by eventually consistent systems and makes its formal specification challenging.
  • Combinations of different consistency levels. Even in applications where basic eventual consistency is sufficient most of the time, stronger consistency may be needed occasionally. This has given rise to a wide variety of features for strengthening consistency on demand. Thus, some systems now provide a mixture of eventual and strong consistency [1, 13, 21], and researchers have argued for doing the same with different forms of eventual consistency [5]. Other systems have allowed strengthening consistency by implementing transactions, usually not provided by geo-replicated systems [14, 24, 30]. Understanding the semantics of such features and their combinations is very difficult.

The absence of a uniform and widely applicable specification formalism complicates the development and use of eventually consistent systems. Currently, there is no easy way for developers of such systems to answer basic questions when designing their programming interfaces: Are the requirements of my application okay with a given form of eventual consistency? Can I use a replicated data type implemented in a system X in a different system Y? What is the semantics of combining two given forms of eventual consistency?

We address this problem by proposing a formal and declarative framework for specifying the semantics of eventually consistent systems. …
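To make the “eventual” guarantee concrete, here is a toy JavaScript sketch (an illustration, not taken from the report): two replicas of a last-writer-wins register accept conflicting writes while disconnected, then converge once they exchange state and resolve the conflict.

// Toy illustration: two replicas of a last-writer-wins register.
function makeReplica() { return { value: null, ts: 0 }; }

function write(replica, value, ts) {
  if (ts > replica.ts) { replica.value = value; replica.ts = ts; }
}

// Anti-entropy step: both replicas keep the write with the highest timestamp.
function sync(a, b) {
  var winner = (a.ts >= b.ts) ? a : b;
  a.value = b.value = winner.value;
  a.ts = b.ts = winner.ts;
}

var r1 = makeReplica(), r2 = makeReplica();
write(r1, "x=1", 1); // concurrent writes on disconnected replicas
write(r2, "x=2", 2);
sync(r1, r2);        // updates stop arriving; replicas exchange state
console.log(r1.value === r2.value); // true -- the replicas have converged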

See also:


Denny Lee (@dennylee) described Updated HDInsight on Azure ASV paths for multiple storage accounts in a 3/25/2013 post:

If you’ve joined the HDInsight Preview – you will notice many new changes, including the tight integration with Windows Azure and that HDInsight defaults to ASV. As noted in Why use Blob Storage with HDInsight on Azure, there are some interesting technical (performance) and business reasons for utilizing Azure storage accounts. But if you had been playing with the HadoopOnAzure.com beta and switched over to the Windows Azure HDInsight Service Preview – you may have noticed a quick change in the way asv paths work. Here’s a quick cheat sheet for you.

In general, to access ASV sources:

#ls asv://$container$@$storage_account$.blob.core.windows.net/$path$
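To make the placeholder pattern concrete, here is a tiny JavaScript helper (an illustration of the template only, not part of HDInsight; the names come from the examples below):

// Illustrative helper: compose an ASV path from its parts.
function asvPath(container, storageAccount, path) {
  return "asv://" + container + "@" + storageAccount +
         ".blob.core.windows.net/" + (path || "");
}

console.log(asvPath("rainier", "doctorwho", "muir"));
// -> asv://rainier@doctorwho.blob.core.windows.net/muir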

The exception is the default container, which was created when you originally set up your cluster. For example, my storage account is “doctorwho” and the container (which is the name of my HDInsight cluster) is “caprica” (Yes, I’m mixing Battlestar Galactica and Doctor Who – deal with it!):

#ls asv://caprica@doctorwho.blob.core.windows.net/

Because this is also the default container / storage account, you can just go:

#ls /

If you want to access another container in the same storage account, you’ll have to specify the entire statement. For example, if I wanted to access the rainier container, muir folder in my doctorwho account:

#ls asv://rainier@doctorwho.blob.core.windows.net/muir

Likewise, if you want to access a completely separate storage account, you can follow the same pattern, provided you have specified the account information within core-site.xml (more info below). For example, if I wanted to access the ultimate container, frisbee folder in my riversong account:

#ls asv://ultimate@riversong.blob.core.windows.net/frisbee

Note: for the above to work, you will need to modify your core-site.xml and add a fs.azure.account.key.$full account path$ property – the template would look like:

<property>
  <name>fs.azure.account.key.$account$.blob.core.windows.net</name>
  <value>$account-key$</value>
</property>

For my riversong account, it would look like:

<property>
  <name>fs.azure.account.key.riversong.blob.core.windows.net</name>
  <value>$riversong-account-key$</value>
</property>

Enjoy!


The Windows Azure (@WindowsAzure) team described How to Administer HDInsight Service in a 3/22/2013 post:

In this topic, you will learn how to create an HDInsight Service cluster, and how to open the administrative tools.

Table of Contents
How to: Create an HDInsight cluster

A Windows Azure storage account is required before you can create an HDInsight cluster. HDInsight uses Windows Azure Blob Storage to store data. For information on creating a Windows Azure storage account, see How to Create a Storage Account.

Note

Currently HDInsight is only available in the US East data center, so you must specify the US East location when creating your storage account.

1. Sign in to the Management Portal.

2. Click + NEW on the bottom of the page, click DATA SERVICES, click HDINSIGHT, and then click QUICK CREATE.

3. When choosing CUSTOM CREATE, you need to specify the following properties:

[screenshot]

4. Provide Cluster Name, Cluster Size, Cluster Admin Password, and a Windows Azure Storage Account, and then click Create HDInsight Cluster. Once the cluster is created and running, the status shows Running.

[Screenshot: HDInsight Quick Create]

The default name for the administrator's account is admin. To give the account a different name, you can use the custom create option instead of quick create.

When using the quick create option to create a cluster, a new container with the name of the HDInsight cluster is created automatically in the storage account specified. If you want to customize the name, you can use the custom create option.

Important: Once a Windows Azure storage account is chosen for your HDInsight cluster, you can neither delete the account nor change it to a different account.

5. Click the newly created cluster. It shows the summary page:

[Screenshot: cluster summary page]

6. Click either the Go to cluster link, or Start Dashboard on the bottom of the page to open HDInsight Dashboard.

[Screenshot: HDInsight Dashboard]

How to: Open the interactive JavaScript console

Windows Azure HDInsight Service comes with a web-based interactive JavaScript console that can be used as an administration/deployment tool. The console evaluates simple JavaScript expressions. It also lets you run HDFS commands.

  1. Sign in to the Management Portal.
  2. Click HDINSIGHT. You will see a list of deployed Hadoop clusters.
  3. Click the Hadoop cluster that you want to upload data to.
  4. From HDInsight Dashboard, click the cluster URL.
  5. Enter User name and Password for the cluster, and then click Log On.
  6. Click Interactive Console.

    [Screenshot: Interactive Console tile]

  7. From the Interactive JavaScript console, type the following command to get a list of supported commands:

    help()

    [Screenshot: Interactive JavaScript console]

    To run HDFS commands, use # in front of the commands (a few more examples follow this list). For example:

    #lsr /
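A few more illustrative console commands, assuming the # prefix simply passes the command through to the equivalent hadoop fs operation (the paths here are hypothetical):

help()
#ls /
#lsr /user
#cat /example/data/sample.txt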
How to: Open the Hadoop command line

To use the Hadoop command line, you must first connect to the cluster using remote desktop.

  1. Sign in to the Management Portal.
  2. Click HDINSIGHT. You will see a list of deployed Hadoop clusters.
  3. Click the Hadoop cluster that you want to upload data to.
  4. Click Connect on the bottom of the page.
  5. Click Open.
  6. Enter your credentials, and then click OK. Use the username and password you configured when you created the cluster.
  7. Click Yes.
  8. From the desktop, double-click Hadoop Command Line.

    [Screenshot: Hadoop Command Line]

    For more information on Hadoop commands, see Hadoop commands reference.

See Also

The Microsoft “Data Explorer” Preview for Excel team (@DataExplorer) posted Mashups and visualizations over Big Data and Azure HDInsight using Data Explorer on 3/18/2013:

With the recent announcement about Azure HDInsight, now is a good time to look at how one might use Data Explorer to connect to data sitting in Windows Azure HDInsight.

As you might be aware, HDInsight is Microsoft’s own offering of Hadoop as a service. In this example, we are going to look at how data can be consumed from HDInsight. We will be using a dataset in HDInsight that contains historical stock prices for all stocks traded on the NYSE between 1970 and 2010. While this dataset is not too big by “Big Data” standards, it does represent many of the challenges posed by big data as far as end-user consumption goes. In case you are interested in trying this yourself, the source of this data is Infochimps. You will need to get the data into an Azure Blob Storage account that is associated with your HDInsight cluster.

The goal of this post is to show you how to use this data to build a report of those stocks that are traded on the NYSE and are part of the S&P 500 index. This dataset alone isn’t enough, because all it provides is price and volume information by stock symbol and date. So we will ultimately need to find company name/sector information from another dataset. More on that later.

Here’s a view of the report we are attempting to build:

[Screenshot: the finished report]

The steps below show you how to build this interactive report.

Step 1: Connect to HDInsight and shape the data

The first step is to connect to HDInsight and get the data in the right shape. HDInsight is a supported data source in the Data Explorer ribbon.

If you are following along and don’t see HDInsight in the Other Sources dropdown, you need to get the latest update for Data Explorer.

[screenshot]

Once the account details are provided, we will eventually be connected to the HDFS filesystem view:

[screenshot]

At this point, you will likely notice the first challenge. The data that we want to consume is scattered across dozens of files – and we absolutely need the data from all of these files. However, this is an easy problem for Data Explorer.

As a first step, we need to use Data Explorer to subset the files based on a condition, so that all unnecessary files are filtered out. We can use a condition that filters down to the list of files that contain “daily_price” in the filename.

[screenshot]

After this step, we have just the files needed. Now for the magic.

Data Explorer has a really cool feature that lets you create a logical table out of multiple text files. You can “combine” multiple files in a filesystem view by simply clicking on the Combine icon in the Content column header.

[screenshot]

Clicking on that icon produces this:

[screenshot]
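Conceptually, Combine just stacks the rows of every selected file into one logical table. In JavaScript terms, it behaves something like this sketch (an illustration, not Data Explorer’s implementation):

// Illustrative sketch: combining text files = concatenating their rows.
function combineFiles(files) {
  return files.reduce(function (rows, file) {
    return rows.concat(file.lines);
  }, []);
}

var files = [
  { name: "daily_prices_A.csv", lines: ["AA,2010-02-01,..."] },
  { name: "daily_prices_B.csv", lines: ["BA,2010-02-01,..."] }
];
console.log(combineFiles(files).length); // 2 rows in one logical table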

At this point, one of the top rows can be promoted to a header using the feature Data Explorer provides for creating a header row. The remaining header rows can be filtered out using the filter capabilities.

[screenshot]

A few more operations to hide unwanted columns will produce our final view:

[screenshot]

Clicking on Done will start to run the query and stream the data down into Excel.

Step 2: Find the S&P 500 list of companies along with company information

At this point, the data should be streaming down into Excel, and there is quite a bit of it. However, since we are not done with data shaping, we need to toggle a setting that disables evaluation/download of the results for this particular query.

Clicking on Enable Download (to toggle it off) stops the download:

[screenshot]

In order to fully build out the report, the next thing we need is the list of companies that are part of the S&P 500 index. We can try to find this in Data Explorer’s Online Search.

Searching for S&P 500 yields a few results:

[screenshot]

The first one looks pretty close to the data we need. So we can import that into Excel by clicking Use:

[screenshot]

Step 3: Merging the two tables, and subsetting to the S&P 500 price data

The last step in our scenario is to combine the two tables using Data Explorer. We can do this by clicking on the Merge button in the Data Explorer ribbon:

[screenshot]

The Merge dialog lets us pick the tables we’d like to merge, along with the common columns between the two tables so that Data Explorer can do a join. Note that a left outer join is used when merging is done this way.
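In case the join semantics are unfamiliar: a left outer join keeps every row of the left table and attaches the matching right-table row where one exists, which is why null values show up a couple of steps below. A small JavaScript illustration (not Data Explorer’s code; the sample rows are hypothetical):

// Illustrative left outer join: every left row is kept; non-matches get null.
function leftOuterJoin(left, right, key) {
  return left.map(function (l) {
    var match = right.filter(function (r) { return r[key] === l[key]; })[0];
    return { row: l, NewColumn: match || null };
  });
}

var prices = [{ symbol: "AA" }, { symbol: "ZZZ" }]; // from HDInsight
var sp500  = [{ symbol: "AA", Company: "Alcoa" }];  // from Online Search
console.log(leftOuterJoin(prices, sp500, "symbol"));
// The ZZZ row gets NewColumn: null -- such rows are filtered out later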

Clicking on Apply completes the merge, and we are presented with a resulting table:

[screenshot]

Columns from the second table can be added by expanding the NewColumn column and selecting the columns to include:

[screenshot]

The result of selecting the columns we’d like to add to the table produces this:

[screenshot]

Note that there are many rows that have null values in the new columns. This is expected, as we have more companies in the left table (the one we pulled from HDInsight).

A simple filtering out of nulls fixes the problem, and leaves us with what we need:

[screenshot]

We are now left with the historical end-of-day figures for all companies in the S&P 500. Clicking on Done will now bring the data into Excel.

Step 4: Fix up a few types in PowerPivot and visualize in Power View

Once the data is downloaded, adding it to Excel’s data model (xVelocity) is easy. Clicking on the Load to data model link puts the data into the model in one click:

[screenshot]

Once the data is in the data model, it can easily be modeled using Excel’s PowerPivot functionality.

[screenshot]

In order for the visualization to work correctly, we need to adjust the types of the following columns in PowerPivot:

  • date – this column needs to be converted to Date type
  • stock_price_close – this column needs to be converted to Decimal type

Once this is done, adding the visualization via Power View is easy.

  • We can insert a Power View from the Insert ribbon/tab in Excel.
  • Delete any default visualizations. From the table named Query in the Power View fields list, select the following columns:
    • date
    • stock_price_close
    • NewColumn.Company
  • Change the visualization to a Line chart.

This will produce a visualization that shows all the companies on a single chart. We can then customize the chart and see just the companies we are interested in.

[screenshot]

That’s all it takes to get all that data from HDInsight, and to combine that data with some publicly available information. Data Explorer’s Online Search is a good source for public data.

We hope this gives you an idea of how Data Explorer enables richer connectivity, discovery and data shaping scenarios while enhancing your Self-Service BI experience in Excel. An interesting thought exercise would be to consider how you might accomplish this scenario without using Data Explorer.

Let us know what you think!



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Nick Harris (@cloudnick) described how to Build Websites and Apache Cordova/PhoneGap apps using the new HTML client for Azure Mobile Services on 3/18/2013 (missed when published):

Today Scott Guthrie announced HTML client support for Windows Azure Mobile Services such that developers can begin using Windows Azure Mobile Services to build both HTML5/JS Websites and Apache Cordova/PhoneGap apps.

The two major changes in this update include:

  • New Mobile Services HTML client library that supports IE8+ browsers, current versions of Chrome, Firefox, and Safari, plus PhoneGap 2.3.0+. It provides a simple JavaScript API to enable both the same storage API support we provide in other native SDKs and easy user authentication via any of the four supported identity providers – Microsoft Account, Google, Facebook, and Twitter.
  • Cross Origin Resource Sharing (CORS) support to enable your Mobile Service to accept cross-domain Ajax requests. You can now configure a whitelist of allowed domains for your Mobile Service using the Windows Azure management portal.

With this update Windows Azure Mobile Services now provides a scalable turnkey backend solution for your Windows Store, Windows Phone, iOS, Android and HTML5/JS applications.


To learn more about the new HTML client library for Windows Azure Mobile Services, please check out the new HTML tutorials on WindowsAzure.com and the following short 4-minute video, in which Yavor Georgiev demonstrates how to quickly create a new mobile service, download the HTML client quick start app, run the app and store data within the Mobile Service, and then configure a custom domain with Cross-Origin Resource Sharing (CORS) support.

Watch on Channel9 here
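For the code-inclined, here is a minimal sketch of the new JavaScript API, assuming a hypothetical Mobile Service named todolist with a TodoItem table (the URL and application key below are placeholders):

// Minimal sketch of the HTML client library; names and values are placeholders.
var client = new WindowsAzure.MobileServiceClient(
    "https://todolist.azure-mobile.net/",
    "YOUR-APPLICATION-KEY");

var table = client.getTable("TodoItem");

// Insert an item, then read back the items that are not yet complete.
table.insert({ text: "Try the HTML client", complete: false })
    .then(function () {
        return table.where({ complete: false }).read();
    })
    .then(function (items) {
        console.log(items.length + " item(s) still to do");
    }, function (error) {
        console.error(error);
    });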

If you have any questions, please reach out to us via our dedicated Windows Azure Mobile Services forum.



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

No significant articles today


<Return to section navigation list>

Windows Azure Service Bus, Caching Access Control, Active Directory, Identity and Workflow

Vittorio Bertocci (@vibronet) posted A Refresh of the Identity and Access Tool for VS 2012 on 3/25/2013:

imageToday we released a refresh of the Identity and Access Tool for Visual Studio 2012.

We moved the current release # from 1.0.2 to 1.1.0.

VS should let you know that there’s a new version waiting for you, but if you’re in a hurry you can go here and get it right away.

We didn’t add any new features: this is largely a service release, with lots of bug fixes.

There is an exception to that, though. We changed the way in which we handle issuer validation of incoming tokens. We now use the ValidatingIssuerNameRegistry by default; however we also added in the UI the necessary knobs for you to opt out and fall back on the old ConfigBasedIssuerNameRegistry, should you need to. Details below.

The New Issuer Validation Strategy

Traditionally, WIF tools (from fedutil.exe in .NET 3.5 and 4.0 to the Identity and Access Tools in .NET 4.5) used the ConfigBasedIssuerNameRegistry class to capture the coordinates (issuer name and signing verification key) of trusted issuers. In config it would look something like the following:

<issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, 
    System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
  <trustedIssuers>
     <add thumbprint="9B74CB2F320F7AAFC156E1252270B1DC01EF40D0" name="LocalSTS" />
  </trustedIssuers>
</issuerNameRegistry>

Its semantics are straightforward: if an incoming token is signed with the key corresponding to that thumbprint, accept it (provided that all other checks pass as well!) and use the value in “name” for the “Issuer” property of the resulting claims.

That worked out great for the first generation of identity providers, but as the expressive power of issuers grew (multiple keys, multiple tenants as issuers leveraging the same issuing endpoint and crypto infrastructure) we felt we needed to provide a better issuer name registry canonical class, the ValidatingIssuerNameRegistry (VINR for short).

We already introduced VINR here, hence I won’t repeat the details. What’s new is that the Identity and Access Tool now uses VINR by default. If you run the tool against the project containing the settings above, afterwards your config will look like the following:

<!--Commented by Identity and Access VS Package-->
<!--<issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, 
System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
<trustedIssuers><add thumbprint="9B74CB2F320F7AAFC156E1252270B1DC01EF40D0" name="LocalSTS" />
</trustedIssuers></issuerNameRegistry>-->
<issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, 
    System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
 <authority name="LocalSTS">
  <keys>
    <add thumbprint="9B74CB2F320F7AAFC156E1252270B1DC01EF40D0" />
  </keys>
  <validIssuers>
    <add name="LocalSTS" />
  </validIssuers>
 </authority>
</issuerNameRegistry>

Apart from the syntactic sugar, the important difference in semantics between the two is that whereas the ConfigBasedIssuerNameRegistry will just use “LocalSTS” as the Issuer property in the ClaimsIdentity representing the caller, regardless of what the issuer is in the incoming token, VINR will enforce that “LocalSTS” is actually the issuer name in the incoming token. If the issuer in the token is different from the value recorded, ConfigBasedIssuerNameRegistry will accept the token nonetheless: VINR will refuse it. The stricter validation rules are necessary when working with multitenant STSes, and are not a bad thing for traditional cases either (ADFS2.0 does this consistently).
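The difference can be sketched in pseudocode-style JavaScript (an illustration of the semantics only, not WIF’s actual implementation):

// Both registries require a trusted signing key; only VINR also checks that
// the issuer named in the token matches a configured valid issuer.
function configBasedValidate(token, trusted) {
  var entry = trusted[token.signingKeyThumbprint];
  if (!entry) throw new Error("untrusted signing key");
  return entry.name; // Issuer comes from config; the token's issuer is ignored
}

function vinrValidate(token, trusted) {
  var entry = trusted[token.signingKeyThumbprint];
  if (!entry) throw new Error("untrusted signing key");
  if (entry.validIssuers.indexOf(token.issuer) < 0)
    throw new Error("issuer not in the configured validIssuers list");
  return token.issuer; // stricter: the token must name a known issuer
}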

If for any reason you rely on ConfigBasedIssuerNameRegistry’s more relaxed validation criterion, I would suggest considering whether you can move to a stricter validation mode: but if you absolutely can’t, the tool offers you a way out. In the config tab you will now find the following new checkbox:

[Screenshot: the new issuer validation checkbox]

If you want to go back to ConfigBasedIssuerNameRegistry, all you need to do is uncheck that box and hit OK.

Miscellanea

Here are a few of the most notable fixes we (well, “we”… it was actually Brent) added in this refresh. The full list is longer; here I am highlighting just the ones for which we received explicit feedback in the past.

  • <serviceModel> bug. We had a bug where the Tool would throw if a <serviceModel> element was present in the config; that behavior has been fixed.
  • We weren’t setting the certificate validation mode to none for the Business STS providers, but we got feedback that self-signed certificates are in common use and developers needed to turn off cert validation by hand; hence, we included all providers in the cert validation == none logic and added a comment in the config to clarify that this is for development purposes only.
  • Better support for ACS namespace keys. We made a flurry of improvements there (better cut & paste support, comments, gracefully handling projects for which we don’t have keys, etc.)
  • More informative error messages

That’s it. We hope that the improvements in this refresh will help you with your apps: please keep the feedback coming!




<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, Traffic Manager, RDP and CDN

Jim O’Neill (@jimoneil) produced Practical Azure #16: Windows Azure Traffic Manager on 3/12/2013:

Like what you heard? Try Windows Azure for FREE and enjoy the freedom to use your preferred OS, language, database or tool. Windows Azure can help you deploy sites to a highly scalable environment, deploy and run virtual machines, and create highly scalable applications in a rich PaaS environment. Give it a try!

_________________

Abstract:
In Part 16 of his Windows Azure series, Jim O’Neil breaks down Windows Azure Traffic Manager. Tune in as he describes how Traffic Manager allows you to control the distribution of user traffic to Windows Azure hosted services, as well as demos a scenario in which you can easily manage and coordinate various cloud services across datacenters and geographies.

After watching this video, follow these next steps:

Step #1 – Try Windows Azure: No cost. No obligation. 90-Day FREE trial.
Step #2 – Download the Tools for Windows 8 App Development
Step #3 – Start building your own Apps for Windows 8

Subscribe to our podcast via iTunes or RSS



<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

No significant articles today



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

No significant articles today


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today


<Return to section navigation list>

Cloud Security, Compliance and Governance

No significant articles today


<Return to section navigation list>

Cloud Computing Events

Craig Kitterman (@craigkitterman, pictured below) posted Windows Azure Community News Roundup (Edition #59) on 3/22/2013:

imageEditor's Note: This post comes from Mark Brown, Windows Azure Community Manager.

Welcome to the newest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure.

Here is what we pulled together for the past week based on your feedback:

Articles, Videos and Blog Posts

Upcoming Events and User Group Meetings

North America

Europe

Rest of World/Virtual

Code on GitHub

Interesting Recent Windows Azure Discussions on Stack Overflow


<Return to section navigation list>

Other Cloud Computing Platforms and Services

No significant articles today


<Return to section navigation list>
