
Thursday 27 January 2011

Cloudreach Vs Google Vs Amazon Web Services

On the track of course!

Here's what happened when the Cloudreach team and a few budding racers from Google and Amazon met on the racing circuit.

Cloudreach Technical Director and Co-Founder James Monico won everything. We all suspect he'd been having secret lessons in the run-up. We couldn't see him for dust... or should that be a cloud? Never mind, that was a bad joke.

The Cloudreach Annual Nora Batty Convention


There were no losers...apart from those who didn't win!

Wednesday 26 January 2011

The Cloud Vs Grid!!!

Some of the major issues with grid computing at the moment are:
  1. Its static nature: users who require access to computational resources need to make a request to the resource providers in order to host their apps on the providers' nodes, which then become available as a service to anyone with the right login credentials.
  2. Data transfer time: data needs to be transferred manually to any resource that requires it. This can take a long time, further delaying the start of the process's execution.
  3. Difficulties with fine-grained parallelism: due to the latency involved in inter-node communication on the grid, approaches based on fine-grained parallelism, using communication paradigms such as MPI or OpenMP, are impracticable on the grid. This limits the range of applications that can be run there.
However, Cloud Computing can be used to resolve many of these issues:
  1. Pay-as-you-go paradigm: Cloud computing allows resources to be provisioned on demand and billed on a usage basis, removing the static request-and-provision cycle.
  2. Data transfer: as data held with Infrastructure as a Service (IaaS) providers can be moved by reference, it is possible to transfer data (once on the cloud) to the node(s) that require it without shipping the bytes around manually (see the short sketch after this list).
  3. Fine-grained parallelism: with HPC on the public Cloud becoming a reality through offerings such as Amazon's Cluster Compute, it is possible to purchase time on a virtualised cluster that may be used for tightly-coupled parallelised processes (for instance, parallel computing applications for bio-informatics that cannot be expressed in the Map/Reduce model, such as the construction of a Bayesian network).
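
As a quick illustration of the "by reference" point above, here is a minimal sketch using the boto Python library; the bucket and key names are made up for the example. An S3 copy is performed server-side, so the bytes never leave Amazon's network:

import boto

# Copy a key between buckets server-side: the data is "moved" by reference
# inside AWS rather than being downloaded and re-uploaded by the client.
conn = boto.connect_s3()
destination = conn.get_bucket("cluster-scratch")  # hypothetical buckets/keys
destination.copy_key("input/genome.fa",           # new key name
                     "research-data",             # source bucket
                     "raw/genome.fa")             # source key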
As Cloud Computing can solve several of the issues inherent in grid computing, as we have seen above, there is an emerging need to wrap several of the tools used for (distributed and HPC) scientific programming into an SDK, overlaid with a workflow management system. This could be provided as a Platform as a Service.

The most interesting part of such an SDK, from this perspective, would be the aforementioned workflow management system*. The workflow management system could automate resource provisioning, requesting and obtaining the right type of resource - a cluster on the Cloud, or a number of nodes without spatial locality - for the type of process that is to be executed.
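
To make that concrete, here is a minimal sketch, using the boto Python library, of the kind of provisioning decision such a workflow manager could automate. The helper, the AMI ID and the decision rule are hypothetical illustrations, not part of any existing SDK:

import boto.ec2

def provision_for(job_is_tightly_coupled, node_count):
    conn = boto.ec2.connect_to_region("eu-west-1")
    if job_is_tightly_coupled:
        # MPI-style jobs need low-latency interconnects, so request Cluster
        # Compute instances in a single placement group.
        conn.create_placement_group("hpc-job")
        reservation = conn.run_instances("ami-xxxxxxxx",  # a cluster-capable AMI
                                         min_count=node_count, max_count=node_count,
                                         instance_type="cc1.4xlarge",
                                         placement_group="hpc-job")
    else:
        # Coarse-grained (embarrassingly parallel) jobs can run on ordinary
        # instances with no spatial locality.
        reservation = conn.run_instances("ami-xxxxxxxx",
                                         min_count=node_count, max_count=node_count,
                                         instance_type="m1.large")
    return reservation.instances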
This approach could potentially be more convenient and cost-effective than using the grid because:
  • an automated approach to resource allocation would save the time consumed in resource request and provisioning in the "static" grid,
  • the Cloud can be used for processes that use either coarse- or fine-grained parallelism,
  • small commercial organisations in a variety of domains (Oil and Gas, Bioinformatics, etc.) can save on the immense cost of purchasing cluster hardware and of training users (who may be scientists without an informatics background) in its use, and
  • IaaS providers like Amazon provide credits for the use of resources on the public Cloud for research in educational institutions.
*It is a moot point whether we may refer to a workflow management system as part of an SDK. An "IDE for the Cloud" might perhaps be the mot juste in this case.

Tuesday 25 January 2011

Amazon SES: A Quick First Look

Amazon Web Services seem to pull out new cards from their collective sleeves almost every single day. This is of course a source of great joy at Cloudreach HQ - in addition to enabling us to provide better value-added services to our clients, the geeks among us like it because it’s yet another toy to have a play with.

Today’s toy is Amazon Simple Email Service (Beta), a highly scalable and cheap bulk email sending service. This came to us at quite an interesting time: we had just built a Postfix email server for one of our clients to send bulk emails from, and had been battling ISPs who were suspicious of our client’s motives.

Amazon SES takes away much of the complexity involved in sending bulk email; you no longer need to build your own email server to do it.

As SES is still in Beta, you cannot access it from the AWS console. You can, however, use a set of Perl scripts AWS provides to get started. The rest of this blog post describes how this may be done:

  1. Sign up for Amazon SES. You can do this by visiting http://aws.amazon.com/ses
  2. You will now have access to the Amazon SES developer sandbox via the Perl scripts provided. The Perl scripts use your Access Key and Secret Access Key for authentication. Copy them into a file as follows, and make sure the file’s permissions are set to 600 (Linux):

    AWSAccessKeyId=xxxxxxxxxxxxxxx
    AWSSecretKey=xxxxxxxxxxxxxxxxxxx

    See the Getting Started guide from AWS for details.
  3. However, to get the Perl scripts working on Ubuntu, you will need to install the following packages:

    libxml-libxml-perl
    libcrypt-ssleay-perl
    perl-doc
    libxml2-dev

    (A single sudo apt-get install of the four packages above pulls them all in.)
  4. You will then need to verify your email address and any other email addresses you wish to send emails to (because you’re still sandboxed, you cannot send emails to email addresses you haven’t verified). The developer’s guide has details on how to do this.
  5. To send emails to all of your verified email addresses, you can use the simple Python wrapper script I’ve knocked together, which you can download from our public Amazon S3 bucket. To learn how to execute it, type python send_email.py -h. (A rough sketch of such a wrapper follows this list.)
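
In case the download link disappears, here is a rough sketch of what such a wrapper can look like. It drives the AWS-provided ses-send-email.pl script via subprocess; the script path, credentials path, addresses and command-line flags below are all assumptions, so check the usage output of the Perl script for the exact options:

#!/usr/bin/env python
import subprocess

SES_SCRIPT = "/opt/ses-scripts/ses-send-email.pl"  # wherever you unpacked the scripts
CREDENTIALS = "/home/you/aws-credentials"          # the 600-permission file from step 2
SENDER = "verified-sender@example.com"             # must be a verified address

def send_email(recipient, subject, body):
    # The message body is piped to the Perl script on stdin.
    proc = subprocess.Popen([SES_SCRIPT, "-k", CREDENTIALS, "-f", SENDER,
                             "-s", subject, recipient],
                            stdin=subprocess.PIPE)
    proc.communicate(body)
    return proc.returncode

if __name__ == "__main__":
    # Send a test message to each of your verified recipients.
    for address in ["alice@example.com", "bob@example.com"]:
        send_email(address, "SES sandbox test", "Hello from Amazon SES!")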

Now that you’re done playing with the SES sandbox, you might want to ask Amazon Web Services to grant you production access by filling in this form. AWS indicate that most requests will be approved within 24 hours; however, we can’t comment on that just yet, because we only got our hands on this a couple of hours ago!

Siddhu Warrier

Friday 21 January 2011

Use your Android to control AWS

There are two ways you can use the AWS API from your Android-powered device: the Java SDK or the RESTful API. However, the AWS SDK for Android (Beta) is a relatively recent development, and does not yet support accessing, monitoring, and manipulating Amazon EC2 instances.

Using the RESTful API to painstakingly build this functionality is probably not worth it, as the AWS SDK for Android is likely to include it in the foreseeable future, at which point you would have to migrate all of your code over. But if you can’t wait, there is a way to get the AWS Java SDK to play with your Android.
The principal reason you can’t use the AWS Java SDK out of the box on your Android device is the SDK’s use of the Streaming API for XML (StAX), which lives in the javax namespace. The Dalvik VM used on Android does not include the whole Java stack, principally to conserve space. Additionally, as a developer, you are not by default allowed to bundle libraries in the java or javax namespaces into your application - for good reason, as your application could end up incompatible with future versions of Android that include the selfsame library.
So, if you try to include the AWS SDK JAR and all of its dependencies, including the StAX API, in your application, you will be faced with the following error upon compilation:
[2010-10-24 14:11:43 - ElasticDroid]
Attempt to include a core class (java.* or javax.*) in something other
than a core library. It is likely that you have attempted to include
in an application the core library (or a part thereof) from a desktop
virtual machine. This will most assuredly not work. At a minimum, it
jeopardizes the compatibility of your app with future versions of the
platform. It is also often of questionable legality.
[...]
But if you really want to do it (you do, don’t you?!), there’s a workaround. Before I describe it, let me first make the caveats of this approach clear:
  • You run the risk of your application failing to work with a future version of Android (you’re fine with Gingerbread).
  • You won’t be able to use the ADT builder in Eclipse; you will have to use Ant.
  • You may need to change your build file every time there is a new Android SDK, as Google often change the build.xml templates without documenting it.

So, the workaround! You can use the --core-library option to force the Android build system to accept the StAX JAR into your application. The rest of this article describes how.

Assumptions

We make the following assumptions here:
  • Your source code is in the src/ directory.
  • Your external libraries (JARs) are in the lib/ directory.
  • You are using revision 8 of the Android SDK.
Generate build.xml
To generate a build.xml file for your project, fire up a terminal, and type
cd path/to/project
android update project -p . --target android-8
This should create/update the files:
  1. default.properties
  2. local.properties. Warning: do not commit this file into version control; it contains the path to the Android SDK on your machine!
  3. build.xml
Modify build.xml
The build.xml file created by default imports most of its rules from a template file. The easiest way to customise your build.xml is to copy the contents of that template into your newly created file.
To do so:
  1. Look for the end of the setup section, marked by the <setup /> tag.
  2. Change it to read <setup import="false" />, so the template's rules are no longer imported automatically.
  3. Open the template file in the Android SDK using your favourite text editor and copy its contents into your build.xml.
    1. WARNING: There are several different template files lying about in the Android SDK folder. Some of them will cause your Ant build to die a miserable NullPointerException death. This took me hours to figure out!
    2. For SDK r08, the file is:

path.to.android.sdk/tools/ant/main_rules.xml

  4. Make sure jar.libs.dir is set to the directory containing your AWS JAR; i.e., given the lib/ directory assumed above, you should have the following line in your build file: <property name="jar.libs.dir" value="lib" />
  5. Now, this step is the nub! Add the --core-library option to the dex-helper macrodef. Additionally, add a fileset directive to make sure your JAR files are packaged in:

  <macrodef name="dex-helper">
    <element name="external-libs" optional="yes" />
    <element name="extra-parameters" optional="yes" />
    <sequential>
      <echo>Converting compiled files and external libraries into ${intermediate.dex.file}...</echo>
      <apply executable="${dx}" failonerror="true" parallel="true">
        <arg value="--dex" />
        <arg value="--core-library" />
        <arg value="--output=${intermediate.dex.file}" />
        <extra-parameters />
        <arg line="${verbose.option}" />
        <arg path="${out.dex.input.absolute.dir}" />
        <fileset dir="${jar.libs.absolute.dir}" includes="*.jar" />
        <path refid="android.libraries.jars" />
        <external-libs />
      </apply>
    </sequential>
  </macrodef>
  6. Optional: scroll up to the top of build.xml and set debug as the default build target.
Test your new build script

If you have an Android device, plug it in and type ant install into the terminal. This should install the Android app that uses the AWS SDK onto your device.

Alternately, you can fire up an emulator (sort of) quickly by typing:
emulator -avd YourEmuName -timezone "Europe/London" -no-boot-anim

To learn how to create an emulator device, please see Android’s documentation.

Finally, if you’re looking for a working example of this hack, take a look at ElasticDroid (http://code.google.com/p/elastic-droid).

Monday 17 January 2011

Elastic IP on Boot - Not too Much of a Stretch

It is quite usual to have an EC2 instance with an Elastic IP attached that you need to stop or reboot from time to time. Unfortunately, every time you stop it, the Elastic IP association is lost, forcing you to manually reattach it.

There is, however, a very easy way to solve this problem: a script that runs at boot, just like a normal Unix service.
/etc/ec2autoeip.conf (Config file sample)

AWS_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxx'
AWS_SECRET_ACCESS_KEY = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
IP = 'ww.xx.yy.zz'


/etc/init.d/ec2autoeip (The actual service script code)

#!/usr/bin/python
import sys
import urllib

import boto.ec2

# Pull AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY and IP in from the config file.
execfile("/etc/ec2autoeip.conf")

# Ask the EC2 metadata service for our instance ID and availability zone.
instance_id = urllib.urlopen("http://169.254.169.254/latest/meta-data/instance-id").read()
# Strip the trailing zone letter (e.g. "eu-west-1a" -> "eu-west-1") to get the region.
zone = urllib.urlopen("http://169.254.169.254/latest/meta-data/placement/availability-zone").read()[:-1]

conn = boto.ec2.connect_to_region(zone,
                                  aws_access_key_id=AWS_ACCESS_KEY,
                                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                                  is_secure=False)

if len(sys.argv) > 1 and sys.argv[1] == "start":
    conn.associate_address(instance_id, IP)
    print("Attaching " + IP + " to " + instance_id)
elif len(sys.argv) > 1 and sys.argv[1] == "stop":
    conn.disassociate_address(IP)
    print("Detaching " + IP)


Finally, it is time to register the service to run at boot. On Debian and Ubuntu you can do it like this:

root@somewhere# update-rc.d ec2autoeip defaults
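
Note that update-rc.d expects the init script to be executable, so if you have not already done so, set the executable bit first (assuming the script lives at /etc/init.d/ec2autoeip as above):

root@somewhere# chmod 755 /etc/init.d/ec2autoeip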


This script uses the awesome boto Python library, so you will need to install it for the script to work properly.


Emilio Garcia
Cloudreach Senior Engineer

Friday 14 January 2011

Cloud Computing - The New Rock and Roll

So a new year has begun and it's out with the old and in with the new. Well, for the most part. It was during my post-Christmas clean-up yesterday that I found myself furtively slipping X-Factor's latest offering into the lonely depths of my bottom CD drawer. Another act, another fad. Yet they are called today's rockstars. Surely such a term should be saved for the Bowies and REMs of this world, demonstrating resilience and evolution through the years, sticking it out when the going was tough and long before the recognition, hype and groupies set in? Well, at least there is a rightful new chart-topper in the world of Business Intelligence and, what's more, this one is here to stay.

A recent study by Cisco IBSG (1) estimated that 12% of enterprise workload will run in the cloud by 2013, and that the cost savings from using cloud services will benefit the UK economy alone to the tune of 30 billion Euros (2) over the next five years. Yet there was no Simon Cowell back in the 60s, when the earliest hint of utility computing fell on the deaf ears of a technologically primitive world. Only later, amidst the clang of Blur and Oasis fading, did a major record label of the IT world, Amazon, sign up to the idea and put cloud computing before a wider audience with the launch of Amazon Web Services (AWS) in 2002. The rest is history; roll on five years to find even Google and IBM amongst the fans in the front row with their offerings of Google Apps (3) and IBM Smart Business Development and Test (4).

The fact is, cloud computing has been around for a while, and only now, with years of sound iterative development and millions of trusted users, has the hype finally set in. With the immediate forecast being a year-on-year growth rate of 43% (5), cloud computing's fan base is set to grow relentlessly. And boy, is it going to be some show.

(1) http://www.cisco.com/web/about/ac79/docs/wp/sp/Service_Providers_as_Cloud_Providers_IBSG.pdf
(2) http://www.pcr-online.biz/news/35321/Cloud-to-benefit-UK-economy-by-30-billion-Euros
(3) http://www.google.com/apps/intl/en/business/index.html
(4) http://www-935.ibm.com/services/us/igs/cloud-development/
(5) http://www.newinnovationsguide.com/Cloud.html


Fabien Flight
Senior Engineer - Cloudreach

Wednesday 12 January 2011

Building Your Cloud Model

More information on costs for the different types of Cloud model
There are different types of Cloud services - SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). Each one has unique attributes when it comes to looking at costs. I've jotted down a few thoughts on each, and I've included some costs from one of the market leaders in each area.

IaaS (Infrastructure as a Service)
Capital expenditure on an infrastructure project is much cheaper in the Public Cloud, in terms of physical equipment, power/cooling, server licensing, hypervisor set-up and config, etc. It is not unheard of to make 80% savings in comparison to setting up new servers and infrastructure on premise. There are also big ongoing revenue savings from the time saved not managing the physical and virtual layers in your data centre. To give you an idea, running a small Windows server instance on Amazon Web Services infrastructure 24/7 would cost approximately £450 per annum. There are even cheaper instances you can run for low-impact servers, called micro instances - you can run one of these for as little as £120 per annum, and that includes the server O/S cost, although you need to provide the CALs. Storage is very cheap too.
Good reading: Amazon Pricing

Note: you may have heard people talking about "Private Clouds". As the name suggests, these are private to your company, but you will not see the cost savings with this model: your company needs to buy and pay for all the hardware and software, including set-up. In fact, this will probably be even more expensive than traditional IT.

PaaS (Platform as a Service)
Again, you'll have no upfront costs with a provider such as Force.com (an offering from Salesforce.com). The Force.com platform allows you to create your own bespoke applications on their platform. It's a very powerful tool and a very quick way of getting applications out there and onto the web for little upfront cost. I've included a link below for the Force.com platform editions and costs - it's extremely cost-effective.
Salesforce.com/platform/platform-edition/
Another platform which is also gaining popularity is Google's App Engine for Business; its costs are published on Google's site.

In both instances you pay per user per year. There are no license costs, no hardware - just your development time on the platform.

SaaS (Software as a Service)
As with the other two models, there are no upfront costs with SaaS offerings; you pay per user on a subscription tariff. This gives you big savings over the traditional model.
Email / Docs - Google Apps @ £35 per person per annum
CRM - Salesforce @various

Tuesday 11 January 2011

How do Cloud Computing costs add up? Is it more complex to implement Cloud solutions rather than 'traditional' ones?

I was recently asked these two questions by a potential customer and thought I'd share some thoughts with you all, as this is a rather large topic.

How do Cloud Computing costs add up?

The biggest cost driver of moving to the Public (multi-tenanted) Cloud is the avoidance of Capex investment on day one, or indeed in the longer term when you need to refresh and upgrade. You only pay for what you consume and you have no tie-ins, so when you finish with something you simply switch it off and stop paying. The TCO (Total Cost of Ownership) of an application, server or storage in the cloud is very compelling indeed - it's like consuming IT as a utility: flick the switch on and off as your business needs it. It stretches your budgets further and enables IT to deliver much more for less - you can focus your team on important projects which deliver competitive advantage to your organisation, rather than on managing standard business applications and systems which don't differentiate you from your competition.

Is it more complex to implement Cloud solutions rather than 'traditional' ones?

To answer the second question: the implementation of Cloud technology is now well established and proven. The key points are to use a specialist third party to help and guide you through the implementation (it will save you a great deal of time and give you confidence and support), and to take advantage of the API (Application Programming Interface) driven architecture - by which I mean the interoperability and ease of migration this gives you. Most Cloud offerings come with an API, which enables tools to move your data in and out with ease and to link and integrate services together. In many cases, moving to a Cloud offering is much easier than upgrading the applications and infrastructure in your traditional estate. Here is an overview of some API technology: Wikipedia.org/wiki/Cloud_APIs

If you have specific requirements or just want to talk about the Cloud, give me a shout and I'd be happy to talk it through. It's a big topic! And after hearing all of this, I'm happy to say we welcomed yet another forward-thinking customer into the Cloud.

Tom Ray
Head of Operations
Cloudreach
Email: tom.ray@cloudreach.co.uk
Twitter: @cloudreach

Monday 10 January 2011

Tweet Tweet

Why not follow us on Twitter to get regular updates from the Cloudreach team? @cloudreach

Hello World!

Hello World!

Welcome to the Cloudreach blog, where you will find insight, comment and opinion on all matters related to cloud technology.

We are proud of our knowledge of Google Apps and Amazon Web Services in particular, and in the spirit of community we aim to share some of our learnings in the hope that one day we can go on a journey together.

As one of the oldest Google Enterprise and AWS partners, we have seen it, done it, and collected many a t-shirt along the way, and it is this experience that we hope will prompt you to subscribe to this blog for future updates.

Unless you have unplugged all the cables and live in a Faraday cage, you cannot have missed the hype surrounding cloud computing. We urge you to cut through the chaff and find the companies that are genuinely innovating, bringing something new to the table that fundamentally changes the way business is done. We are one such company!

Over the last 10 years, corporate processes and knowledge have been pushed into web apps in order to ease administration. Pushing the server component outside the organisation is the next logical step in simplifying things even further.

This blog is written and maintained by the technical team at Cloudreach, who sit on the crest of the cloud technology wave. As trusted consultants, we work with a wide range of IT technologies and with companies in pretty much every business sector. This experience gives us knowledge that is useful to our customer community and the interested public in general, and with appropriate consent we release articles and insights that result from these engagements. In line with our values of openness, we ask our engineers to write articles regularly as part of their daily bread. We also warmly invite you to comment on the articles posted here, in order to improve the quality of the information for the benefit of all.

Looking forward to blogging together.

James Monico
Cloudreach Technical Director