Archive for the ‘Cloud Computing’ Category

Scalr 1.1.0 – getting in touch – Part II

Friday, August 21st, 2009

As promised in an earlier post, I document here in my blog further steps and experiences in running Scalr on my own server infrastructure.

After Scalr was set up as described in my first post, some additional configuration is required to explore Scalr 1.1.0 in detail.
First, log in to the Scalr frontend as the admin user, create a new system user (“Client”), and activate it. After logging in with the newly created, active user, you have to enter your AWS credentials and upload the public and private key from your AWS account.

The next screenshot shows the configuration for my user.
scalr system user AWS settings

Be sure to get this configuration right, otherwise you will not be able to proceed.


The Dasein Cloud API – (another) promising approach?

Monday, August 17th, 2009

Today I found a promising news article about another Cloud API (The Dasein Cloud API – O’Reilly Broadcast). I became aware of this article because George Reese is behind the project. He wrote the excellent book Cloud Application Architectures – I can highly recommend it. The Dasein Cloud API is supported by enStratus, a company founded by George Reese. The Dasein Cloud API project site is hosted at SourceForge.

What is the Dasein Cloud API about?

Dasein Cloud provides a cloud-independent interface in Java for accessing cloud resources. If you are writing an application to manage your cloud infrastructure, you write the calls against the Dasein Cloud API without having to learn the specifics of the web services calls from different providers like Amazon Web Services and Rackspace. Cloud providers can then provide cloud-specific implementations of the API that simply plug in to your application without any need for changing your code. The model is very much like JDBC.
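The JDBC-style plug-in model can be illustrated with a short sketch. Note that the interface and class names below are my own invention for illustration, not the actual Dasein Cloud API (which is written in Java):

```python
from abc import ABC, abstractmethod


class CloudServer(ABC):
    """Provider-neutral interface, analogous to a Dasein-style abstraction."""

    @abstractmethod
    def launch(self, image_id: str) -> str:
        """Launch a server from an image and return its instance ID."""


class FakeAwsServer(CloudServer):
    """Hypothetical provider plug-in; a real one would call the AWS API."""

    def launch(self, image_id: str) -> str:
        return f"i-aws-{image_id}"


class FakeRackspaceServer(CloudServer):
    """Hypothetical Rackspace plug-in with the same interface."""

    def launch(self, image_id: str) -> str:
        return f"srv-rs-{image_id}"


def provision(provider: CloudServer, image_id: str) -> str:
    """Application code depends only on the interface; the concrete
    provider is plugged in at configuration time, JDBC-style."""
    return provider.launch(image_id)
```

Swapping the provider then requires no change to the application code, only a different plug-in at configuration time.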

This approach sounds very interesting, but from my point of view its success depends on the providers and players in cloud computing services. George mentioned that in his article as well.

enStratus has made available its Amazon Web Services implementation both as a reference implementation and a working system for Dasein Cloud + AWS. enStratus will be releasing implementations for other providers, and hopefully providers will begin providing their own implementations.

I wonder about the motivation for providers to contribute to the Dasein Cloud API. Isn’t it likely that a provider focuses on its own API first? I think only with wide adoption and a certain level of awareness will a provider be motivated to offer its own implementation. I’m excited about the future progress of the Dasein Cloud API and curious which providers will provide their own implementations.

It’s clear that a common standard for an independent Cloud API would have a positive impact on all players in Cloud Computing…

Among others, I have researched the following independent “Cloud API approaches” so far for my Master Thesis:

What are other promising projects of this kind?

– yet another URL shortener built with Google App Engine

Sunday, August 16th, 2009

Since Twitter and other microblogging services became so popular, URL shorteners are experiencing another renaissance and are very popular again.

To build your own URL shortener within minutes you only need:

wmshorty is a URL shortener for Google App Engine. You only have to follow the few steps described on the project site to set up wmshorty with Google App Engine. I used my own domain with Google Apps as described here. You have to configure your own domain within Google Apps; there is no other way to do it – for me, at this point, a disadvantage of Google App Engine, Google’s PaaS. Maybe in the future there will be other ways to configure custom domains within Google App Engine…
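Leaving wmshorty’s internals aside, the core of any URL shortener is mapping a numeric datastore ID to a short code and back. A minimal base-62 sketch (my own illustration, not wmshorty’s actual code):

```python
import string

# 0-9, a-z, A-Z: 62 characters, so short codes grow very slowly with the ID.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase


def encode(n: int) -> str:
    """Turn a numeric datastore ID into a short base-62 code."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))


def decode(code: str) -> int:
    """Turn a short code back into the numeric ID for lookup."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

The short code is then appended to the service’s domain; the datastore only has to map IDs to original URLs.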

If you are using WordPress you might be interested in this related article “ — shorten your links“. This article is about WordPress’ own URL shortener “”. I will use my own URL shortener, built as described above, to get some performance results within Google App Engine; these results (if they are meaningful) will be used in my Master Thesis research.

What The Hell Is Cloud Computing?

Wednesday, August 12th, 2009

A classic: IT’s “enfant terrible” Larry Ellison explains Cloud Computing in his inimitable manner (found on YouTube):

If you ask yourself whether Larry Ellison is really anti-cloud computing, this good post by William Hurley gives a possible answer:

… Larry’s comments about cloud computing remind me of the times he bashed virtualisation back in the day. Everyone spread similar rumours then, and the transition from hatin’ to embracin’ looked almost identical.

In other words, Larry didn’t have an informed opinion the first time he was asked about virtualisation. Once he did, his story quickly changed from disparaging virtualisation to announcing Oracle VM, and eventually acquiring companies like Virtual Iron. So far my sources say the alleged cloud computing reversal is the same situation…

John Willis Guest Blogging at Force of Good – Government’s Gone Cloud

Wednesday, August 12th, 2009

John Willis posted a cool guest article on Lance Weatherby’s weblog. The introduction to the post:

Last week the first sentence of an article in the InformationWeek periodical specifically targeted at IT employees of the U.S. Government read as follows:

‘The General Services Administration has issued a Request For Quotation for cloud storage, Web hosting, and virtual machine services.’

You can read the whole post here.


Scalr 1.1.0 – getting in touch – Part I

Thursday, August 6th, 2009

Today my friend and business partner Raphael pointed me to the new release of Scalr. I knew Scalr from the past, but I had not looked at it in detail yet. Scalr released version 1.1.0 under the GPL v2, and now I decided to give it a detailed try.
Scalr promises a lot of value for Cloud Computing and the use of Amazon EC2.

Scalr is a fully redundant, self-curing and self-scaling hosting environment using Amazon’s EC2.

It allows you to create server farms through a web-based interface using prebuilt AMI’s for load balancers (pound, nginx, or Amazon’s load balancing service), app servers (apache, rails, others), databases (mysql master-slave, others), and a generic AMI to build on top of.

The health of the farm is continuously monitored and maintained. When the Load Average on a type of node goes above a configurable threshold a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes a new machine of that type is inserted into the farm to replace it.

Multiple AMI’s are provided for load balancers, mysql databases, application servers, and a generic base image to customize. Scalr allows you to further customize each image, bundle the image and use that for future nodes that are inserted into the farm. You can make changes to one machine and use that for a specific type of node. New machines of this type will be brought online to meet current levels and the old machines are terminated one by one.
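The scaling behaviour described above is essentially a threshold-based control loop over each node type’s load average. A minimal sketch (the thresholds and return values are illustrative, not Scalr’s actual defaults):

```python
def scale_decision(load_averages, upper=0.75, lower=0.25):
    """Decide how a farm role should scale from its nodes' load averages.

    Mirrors the behaviour described above: above the upper threshold a new
    node is inserted into the farm; well below it, a node can be removed.
    The last remaining node is never terminated.
    """
    avg = sum(load_averages) / len(load_averages)
    if avg > upper:
        return "scale_up"
    if avg < lower and len(load_averages) > 1:
        return "scale_down"
    return "hold"
```

Crash recovery is the same loop seen from another angle: a node that stops reporting is replaced by a fresh instance of the same type.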

A paid hosted service is also available, but I want to build my own environment.

If you want to install Scalr, the wiki is a good starting point.
I dropped some lines here in my blog to document my installation on an Ubuntu 9.04 server.

System requirements are defined on the project website as follows:

  • PHP 5.2.5 or higher
  • MySQL 5.0 or higher (MySQL 5.1 or higher preferred)
My server was set up as a LAMP stack. A good how-to for a LAMP installation can be found here.

I had to customize my PHP5 installation to provide the required PHP extensions listed in the project wiki. I searched for the packages with apt-cache search php5-* and installed the required extensions manually.

Furthermore I created a database for Scalr and a valid user for it (replace the placeholders in angle brackets with your own database name, user, and password):

    # mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 41
    Server version: 5.0.75-0ubuntu10.2 (Ubuntu)

    mysql> CREATE DATABASE <scalr_db>;
    Query OK, 1 row affected (0.00 sec)

    mysql> GRANT ALL PRIVILEGES ON <scalr_db>.* TO "<scalr_user>"@"localhost" IDENTIFIED BY "<password>";
    Query OK, 0 rows affected (0.00 sec)

    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.01 sec)

    mysql> EXIT


Ian Foster – What’s faster – a supercomputer or EC2?

Wednesday, August 5th, 2009

I found this interesting comparison today; you should read the whole article here:

  • On EC2, I am told that it may take ~5 minutes to start 32 nodes (depending on image size), so with high probability we will finish the LU benchmark within 100 + 300 = 400 secs.
  • On the supercomputer, we can use Rich Wolski’s QBETS queue time estimation service to get a bound on the queue time. When I tried this in June, QBETS told me that if I wanted 32 nodes for 20 seconds, the probability of me getting those nodes within 400 secs was only 34% – not good odds.

– Ian Foster, Aug 2009


From my point of view, such comparisons always depend on empirical data and, above all, on your particular scenario; neutral statements are hard to find. I asked Thijs Metsch for some empirical data from the RESERVOIR project he mentioned on Twitter. Looking forward to an answer from him…
Does anyone else have empirical data of their own on starting EC2 images in different environments and with various scenarios?

Fireside Chat with Greg Papadopoulos & Werner Vogels

Wednesday, August 5th, 2009

I watched an absorbing discussion about Cloud Computing on cloudbook.net.

Former keynote speakers Greg Papadopoulos, CTO and EVP, Research and Development at Sun Microsystems, and Werner Vogels, VP and CTO at Amazon.com, return to share their thoughts on the rise of Cloud Computing, what direction they see Amazon and Sun leading the evolution of the Cloud Computing industry, and the opportunities it generates.

There are more videos of Werner Vogels you can check out on the website – I can also recommend his nice blog, All Things Distributed.

ElasticVapor :: Life in the Cloud: A Trusted Cloud Entropy Authority

Tuesday, August 4th, 2009

Ruv Cohen posted an interesting thought in his blog: ElasticVapor :: Life in the Cloud: A Trusted Cloud Entropy Authority

Gordon says “How about getting signed entropy from a trusted server on the network/internet?”

Gordon’s comments did get me thinking: maybe there is an opportunity to create a trusted cloud authority to provide signed, verified and certified entropy. Think of it like a certificate authority (CA) but for chaos. Actually, Amazon Web Services itself could act as this entropy authority via a simple encrypted web service call. I even have a name for it, Simple Entropy Service (SES).
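The “signed entropy” idea fits in a few lines: the authority returns random bytes plus a signature over them, and the client accepts the entropy only after verifying the signature. A sketch of my own (HMAC with a shared key stands in for the public-key signature a real entropy CA would use):

```python
import hashlib
import hmac
import os

# A real authority would sign with an asymmetric private key; a shared
# demo key keeps this sketch self-contained.
AUTHORITY_KEY = b"shared-demo-key"


def issue_entropy(nbytes: int = 32) -> tuple:
    """Authority side: return random bytes and a signature over them."""
    entropy = os.urandom(nbytes)
    sig = hmac.new(AUTHORITY_KEY, entropy, hashlib.sha256).digest()
    return entropy, sig


def verify_entropy(entropy: bytes, sig: bytes) -> bool:
    """Client side: accept the entropy only if the signature checks out."""
    expected = hmac.new(AUTHORITY_KEY, entropy, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

The verification step is what turns plain randomness into “certified” randomness: a tampered or forged payload fails the check.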

This idea is very exciting and useful. However, if you think of classical CAs, for example in the field of web server certificates, then I believe only an independent CA in such a position can guarantee the future potential of Cloud Computing without provider lock-in. Provider lock-in here refers not only to the CA itself, but also to a CA that is controlled by a certifying provider and its services. In my view, the goal must therefore be a largely independent CA that also allows small businesses and companies to offer certified and therefore “trusted” Cloud Computing services and resources without an expensive certification process. Thinking of Amazon EC2 images, for example, it should be possible in the future to create your own AMI and have it certified independently of Amazon. That would be real added value – for Amazon as IaaS provider and for us as AWS users and enablers.

Whitepaper – Cloud Computing Use Cases

Saturday, August 1st, 2009

Via Twitter I got news from Reuven Cohen about an interesting whitepaper. He also mentioned it in his blog.
ElasticVapor :: Life in the Cloud: IBM’s Crowd-Sourced Cloud Computing Use Cases White Paper Published: “‘The goal of this white paper is to highlight the capabilities and requirements that need to be standardized in a cloud environment to ensure interoperability, ease of integration and portability. It must be possible to implement all of the use cases described in this paper without using closed, proprietary technologies. Cloud computing must evolve as an open environment, minimizing vendor lock-in and increasing customer choice.’”

Thanks to Reuven, who linked this document on Scribd.

Cloud Computing Use Cases Whitepaper

If you want more information or want to take part in the discussion and conversation, join the Google group cloud-computing-use-case. I’m excited to see the response and feedback from the community to this whitepaper. My personal feedback can be followed in the mentioned Google group.

InfoClipz: Cloud Computing short introduction video

Wednesday, July 29th, 2009

If you are new to the topic of Cloud Computing and confused by common abbreviations like IaaS, PaaS, or SaaS, take a look at this non-technical short introduction video from InfoWorld:

InfoClipz: Cloud computing | Cloud Computing – InfoWorld


Cloud Computing Potentials

Tuesday, July 28th, 2009

In June, together with Raphael Volz, I created a Cloud Computing discussion document named “Cloud Computing Potentials”. I want to introduce that document here in my weblog:

Cloud computing is one of the major trends in IT and is promoted by vendors as a major vehicle to improve IT services and reduce cost. Our report objectively analyses the potential of cloud computing for small and large organizations, providing a sound analysis not dulled by the promises of vendors or the technology hype.

We conclude that Cloud computing applies best practices of enterprise computing that have emerged in the last decades. What is new is the paradigm of aggregating several physical servers into one abstract computing resource that can be used dynamically and scales gracefully by adding new resources.

We present two cases showing that cloud services (IT services operated by the IT vendor) are particularly attractive for SMEs, not so much for larger organizations. We also conclude that public cloud infrastructure such as Google Apps and Amazon Web Services is only cheaper if in-house infrastructure utilization is low. We show that high resource utilization and tight control of KPIs are required to become a successful cloud computing provider.

However, applying cloud computing architecture principles also makes in-house IT infrastructure more controllable for IT departments and is therefore a reasonable computing paradigm. We conclude our report by showing how Open Source software can be used to build your own cloud computing platform.

The first chapter can be read here. If you are interested in the document, please feel free to contact me at “the-cloud (at)”.