
Thursday 15 December 2011

Going Google Roadshow - Edinburgh


On November 24th, Cloudreach and Google hosted the latest ‘Going Google Roadshow’ at Our Dynamic Earth in Edinburgh.  Around 50 attendees from across Scotland made it along to the event which featured presentations from Google, Cloudreach and our guest speaker, Kenny Craig, IT Director at AG Barr.  

AG Barr recently moved approximately 600 users to Google, so Kenny is very well placed to speak about the intricacies of Going Google.  Kenny provided an insight into the business challenges that prompted the decision to move, as well as the benefits and challenges of going through the process with Cloudreach.  

The feedback on this customer testimonial was excellent. The attendees felt it was extremely useful hearing the client’s perspective directly, going beyond the marketing message. We will be continuing this format for future events, and excerpts from Kenny’s presentation can be found at the end of this post.

Brad Kilshaw, Enterprise Channel Manager at Google, delivered the keynote, as well as a very interesting piece on Chromebooks. Cloudreach’s very own Head of Operations, Tom Ray, outlined the many benefits of choosing Cloudreach as your Google deployment partner. Pontus Noren, Director and Co-founder of Cloudreach, facilitated a very entertaining panel discussion at the end of the night, which, as a bonus, featured Google’s EMEA Head of Partners, Mark Hodgson.

All in all, it was a great event. Thanks to all who attended!

The event generated so much interest that we have two more events in Scotland and another in London scheduled for early next year, all of which will be held in collaboration with Google Enterprise.

Dates of upcoming Going Google Roadshow Events:
9th February 2012 - Glasgow
13th March 2012 - London
5th April 2012 - Edinburgh

Please contact carol.rashti@cloudreach.co.uk if you wish to reserve a place, or for any enquiries.


All the best,

Cloudreach Team




Wednesday 14 September 2011

The Release Into Production - ChipsAway Phase 2


And here we stand.

Six months, 148 classes, 20,000 lines of Java (and 3,000 of XML!) after we started, we have reached the point of releasing ChipsAway Phase 2, a project that has kept several of us on our toes for so long, into production.
This is a good time to look back at the six months just past, rather like stout Cortez when with eagle eyes He star'd at the Pacific (On First Looking into Chapman's Homer - John Keats). While there have been periods when we felt rather like the six hundred who rode into the valley of death (The Charge of the Light Brigade - Alfred, Lord Tennyson), this release was what one would have described at the outset as a consummation devoutly to be wished (Hamlet, Prince of Denmark - William Shakespeare).
While there is still Phase 3 to look forward to, one cannot help but feel a tinge — make that a veritable dollop — of sorrow at the much lamented passing of our teammate Rodolfo (no, not literally!). Rodolfo, thank you very much for all of your hard work, and we hope to work with you again at some point in the future.
While this blogpost is lovely, dark, and deep, I have promises to keep, and miles to go before I sleep (Stopping by Woods on a Snowy Evening - Robert Frost). Therefore, it shall suffice to say that we at Cloudreach are immensely (and ordinately, if that is a word) proud of how we did, finally, Make It Happen!

Siddhu Warrier
Senior Engineer, Cloudreach

Monday 15 August 2011

Global Cloud Partnership


Cloudreach and LTech form the Global Cloud Partnership

As leading providers of cloud-based IT products and services, Cloudreach Limited and LTech, based in the UK and USA respectively, announce the launch of the Global Cloud Partnership.

August 15, 2011, Cloudreach Limited and LTech, both Google Enterprise partners, are pleased to announce the world’s first international partnership, dedicated to connecting businesses to the cloud.

The Global Cloud Partnership has been formed with a vision to provide customers with a strong presence internationally, and premium local support. Through the alignment of their service offerings and common solutions, clients will receive the same high quality service irrespective of the business’s geographic location.

"We are pleased to have Cloudreach as a global services partner.” said Jack Ryan, Managing Director at LTech. “The scope of this alliance allows us to maintain our delivery performance and quality support for our expanding base of global customers.”

As leading providers of Google Apps, Cloudreach and LTech take pride in seamlessly transitioning businesses to the cloud. Both adopt the role of a trusted partner, working with organizations to successfully migrate, integrate and operate enterprise-class cloud computing solutions as a means of achieving strategic business goals.

"Until now the only option for finding a global cloud services provider was to turn to one of the legacy "big" integrators, who just aren't skilled in the specialist areas required to do the job well. The Global Cloud Partnership provides the best choice for any global enterprise looking to adopt cloud technologies. In setting up a global partnership, LTech was the obvious choice for us." said James Monico, Technical Director at Cloudreach.

Both companies have developed their own range of bespoke tools and procedures to facilitate and enhance their clients’ cloud solutions. The sharing of this knowledge through the Global Cloud Partnership provides customers with an unrivalled level of expertise and access to best-practice resources.

Moving to the cloud is a long term strategy rather than a short term solution. Together, Cloudreach and LTech will continue to provide the support system required to achieve the highest quality cloud experience.

About Cloudreach:
Cloudreach Limited, founded in 2009, is a cloud computing consultancy, based in London, UK, with extensive expertise around Google Apps and Amazon Web Services. Our main services include migration to Google Apps and AWS, managed services of AWS environments and Google Apps, and PaaS based custom application development.

Cloudreach is one of the most innovative companies in the fast-growing market for cloud-based IT services, and is on the leading edge of market developments. Its many successful deployments span a variety of sectors, including architecture, financial services, government bodies, and high-end fashion brands.

About LTech:
Founded in 2001, LTech is a leading provider of products and services focused on connecting business to the cloud. Our cloud deployment services and enablement products, such as LTech Power Panel for Google Apps and LTech Single Sign-On for Google Apps, deliver enterprise cloud computing to organizations of all sizes. 

As an early Google Enterprise Partner, LTech has successfully completed hundreds of Google Apps deployments and helped develop best-practices for adopting and successfully scaling cloud computing programs for large-scale customers in business, government and education. 

Cloudreach and LTech are both proud to be Google Enterprise Partners™ and Amazon Web Services Solution Providers.

Issued on behalf of Cloudreach Limited & LTech
Cloudreach:
Pontus Noren, Director, Co-founder
+44 207 183 3893
www.cloudreach.com



LTech: 
Russ Young, Executive Vice President
919.766.0664
www.ltech.com



Thursday 30 June 2011

Microsoft Office, I Want A Divorce; Google Apps Is My New Love

Dear ‘Paperclip’,

Firstly, yes, it appears that I am writing a letter, and secondly no, I don’t need any assistance. But thanks for asking. Again.

This is going to be difficult; we’ve been together so long. We’ve created so much. Achieved great things. But I’ve met someone else. Someone refreshingly easy to get on with. Someone I find it easier to share with. Someone that allows me to look back at all those changes I’ve gone through without me having to open up every one that didn’t work out. Everything’s so clear and I know exactly where I am. No more raking over old ground looking for an answer. And you change so much, year on year, and when we go over things in the past, try and work on things, the newer you just won’t accept it.

We can do all the things that we used to do but easier and from anywhere. I’m not tied to one place anymore. I’m free to share things with other people. And most importantly I no longer have a massive drain on my finances.

My new love is so free, and allows me to travel - I can enjoy being wherever I am. You’ve got so much baggage in the cupboard that I keep having to come back to check it’s all working.

You’ve made some effort to try and catch up with where I’m going and what I need, but it’s just not good enough.

So I’ve moved on. You know how suspicious I am of change but this change has been so easy, we already work so well together.

On this occasion it’s definitely not me - it’s you.

Keep trying ‘Paperclip’, but as for you and me, it’s over.

xx

Wednesday 16 March 2011

A small step in the cloud, a giant leap for the datacentre

Yesterday Amazon Web Services announced the ability for their Virtual Private Cloud (VPC) product to send and receive traffic directly from the internet, as well as traffic routed via the private site-to-site connection to your on-premise router. Although this may seem like a small step forward, it is in truth a transforming feature in the maturity of cloud infrastructure. To date, the features inherent to the virtualisation that underpins AWS and other clouds have rewarded adopters with an abstraction of hardware from their running machines. The benefits include mobility (in the case of a hardware problem) and fully atomic backups that are very useful when rolling out major patch updates or introducing significant change on a core service. The main limiting factor for anyone wishing to deploy complex enterprise environments into AWS has been the flexibility of the networking elements. Those who require network segmentation, outbound network ACLs and inter-network connectivity were limited by the fact that unless you were inside VPC you had limited control of the subnet in which your machines run. The downside of running in VPC was that all traffic then had to route over your site-to-site connection to your router. With this announcement you are now free to enjoy an enhanced VPC that allows secure access from the internet and a high-throughput secure connection to your datacentre.

From a Cloudreach point of view this means the near-death of our much-loved LAN-to-cloud VPN service that we use in production and disaster recovery deployments. Although this OpenVPN-based service is in the main part superseded, it still has a place in environments where the terminating infrastructure does not support BGP peering within IPSEC tunnels. As a company we very much welcome this extension of the AWS feature set, as it moves the platform on and keeps it well out of reach of its nearest competitors, who were already scrabbling to keep up with the existing feature set. In our minds it opens up the possibility of the truly virtual datacentre, with features that match or exceed the functionality of the legacy best-practice Cisco/VMware/NetApp solutions, without the hassle of running and maintaining complex kit.

For those of us who are used to seeing our servers and network-attached storage as lines on a web page, we can get excited about a future where, as well as our servers, we will be able to add virtual network appliances into our subnets which exhibit equivalent functionality to traditional equipment. I say watch out, Cisco: the world is going virtual, and a vendor will emerge with equivalent functionality to your GSR that is designed as a software-only product to run within cloud infrastructure. The Cisco Nexus project is a great idea, but you still need a bloody great bit of Cisco kit to control your virtual appliance. Something seems wrong about that! Our current excitement is for companies like Zeus, who provide a world-class Layer 7 switch, extending their offering to cover Layer 3-6 functionality in this virtual world.

The functionality released into AWS means that almost all deployments from now on will run within VPC containers, and many of these will use the advanced networking without the site-to-site secure link back to your datacentre, simply to take advantage of fixed IP addresses. Up until now the default gateway of any of your instances had to be the AWS routing infrastructure, and not your own server acting as a router, which made deploying things like client VPNs for mobile workers a little tricky. You can now create multiple subnets within a VPC deployment and control the inbound and outbound network traffic that transits between them. It just makes you think: why bother doing this stuff yourself?
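
To make that concrete, here is a minimal sketch of the kind of layout described above, using the boto Python library; the CIDR ranges are illustrative and the exact calls available will depend on your boto version.

    # Minimal sketch, assuming boto is installed and AWS credentials are configured
    # (for example via environment variables). CIDR ranges are illustrative only.
    from boto.vpc import VPCConnection

    conn = VPCConnection()

    # A fresh VPC with one public and one private subnet.
    vpc = conn.create_vpc('10.0.0.0/16')
    public_subnet = conn.create_subnet(vpc.id, '10.0.1.0/24')
    private_subnet = conn.create_subnet(vpc.id, '10.0.2.0/24')

    # An internet gateway gives the public subnet direct internet access.
    igw = conn.create_internet_gateway()
    conn.attach_internet_gateway(igw.id, vpc.id)

    # Route 0.0.0.0/0 through the gateway for the public subnet only; the private
    # subnet keeps its traffic internal (or routes it over your site-to-site link).
    route_table = conn.create_route_table(vpc.id)
    conn.create_route(route_table.id, '0.0.0.0/0', gateway_id=igw.id)
    conn.associate_route_table(route_table.id, public_subnet.id)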

In the history of Amazon Web Services we see this announcement as even more fundamental than the Dec 2009 update that your virtual servers could now be persistent and stopped and started freely. This small step in the AWS cloud is truly a giant leap for the datacentre as we know it.

James Monico
Technical Director

Monday 14 February 2011

On Soldiers and Servers

Cloud computing provides an unparalleled opportunity for reducing the risks involved in buying infrastructure for everything from small businesses to enterprises. One can requisition only the capacity that is needed, as and when it is needed. The problem of deciding how much capacity to buy for any given moment is one that requires understanding and reasoning with the uncertainties involved in traffic patterns.

Many things in life are uncertain. Managing this uncertainty is an important part of planning in real-world operations, often phrased in terms of risk (the expected benefit or cost across all outcomes). Risks to IT services come in many forms from the common — such as hard disk failure — to the unusual — such as large earthquakes or asteroid strikes. Probability theory and statistics provide tools for reasoning with uncertain situations and are commonly used to estimate and balance risk and so maximise the probability of a successful outcome.

Contracts, too, make risks explicit and set out who bears responsibility for them. Commercial contracts often state conditions statistically in the form of Service Level Agreements (SLAs). These agreements include requirements for up-time to exceed a certain percentage, or for a certain proportion of incidents or problems to be dealt with within a certain time.
By exploring the connection between the causes of death of a group of soldiers in the Prussian cavalry and traffic levels on web-servers, this post describes one way that probability theory may be applied to capacity planning, with the goal of meeting some SLA.

The Commonality of Rare Events

In 1898, a Russian statistician called Ladislaus Bortkiewicz was attempting to make sense of rare events. For our current purposes, a rare event is one that is individually unlikely but has a lot of opportunities to happen, such as mutations in DNA or winning the lottery. He was looking at the number of soldiers in the Prussian cavalry who were killed by horse-kicks; the probability of any given soldier being killed by a horse kick was low, but in the cavalry there are lots of occasions where one could be kicked to death by a horse (whether deserved or not). The question was, how can one estimate the probability that a given number of soldiers would be killed by their horses in a year given the statistical data we have about how many were killed each year on average?
 
The tool that Bortkiewicz used, and the theme of his book ‘The Law of Small Numbers’, was the Poisson distribution, a probability distribution with one parameter: the average number of events in the given period.
Assuming an average of 10 soldiers were killed each year, the Poisson distribution can be plotted:

[Figure: probability mass function of the Poisson distribution with mean 10]

The horizontal axis is the number of events observed (all integers) and the vertical axis is the probability of that number occurring according to the distribution. As can be seen, the highest probability is associated with the mean number of events (ten), but there is a spread of other counts that have non-negligible probability.
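
For anyone who wants to reproduce the plot, here is a minimal sketch using SciPy and matplotlib (the library choice and plotting details are mine, not from the original post):

    # Sketch: the Poisson probability mass function for a mean of 10 events per year.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import poisson

    mean_events = 10                    # average number of events per period
    k = np.arange(0, 25)                # candidate observed counts
    pmf = poisson.pmf(k, mean_events)   # probability of observing exactly k events

    plt.bar(k, pmf)
    plt.xlabel('Number of events observed')
    plt.ylabel('Probability')
    plt.title('Poisson distribution with mean 10')
    plt.show()
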
The probability that a number of events, k, occurs when the mean is λ can be calculated as follows:

Pr(k | λ) = (λ^k / k!) · e^(−λ).

The probability that six men were killed by their horses is then:

Pr(6 | 10) = (10^6 / 6!) · e^(−10) = 0.0631 (3 s.f.).

The Poisson distribution has since been used to model many other situations, such as the spread of epidemics and hardware failure, all of which are ‘rare’ events in the sense above. Traffic to websites can also be modelled using the Poisson distribution; there are large numbers of browsers active in any given period, and the probability of any of them visiting a given site is relatively low. It is this that will allow us to answer some questions about traffic to a site that we have responsibility for maintaining.

Estimating Required Capacity

After that lengthy tour, we return to our original problem: capacity planning for a certain load. We want to estimate the probability that our maximum capacity (in requests per second, rps) is greater than the load we’ll receive. All we can directly estimate is the expected amount of traffic that our server will receive (again in requests per second) and our maximum capacity (through load testing or other means).
Since web traffic may be treated as obeying a Poisson distribution, our problem can be stated as finding the probability that the observed load is less than or equal to our maximum capacity. This is the definition of the cumulative distribution function of the Poisson distribution, for maximum capacity k (an integer) and expected load λ:

Pr(x ≤ k | λ) = Γ(k + 1, λ) / Γ(k + 1),

where Γ(·, ·) is the upper incomplete gamma function and Γ(·) is the ordinary gamma function.

As an example, imagine that we are expecting 50 rps, and have a maximum capacity of 60 rps. The probability that the observed load is less than or equal to 60 rps is then 0.928 to three significant figures, unlikely to meet most commercial SLAs. If we increase our capacity, through improving the code or provisioning more machines, to 70 rps then the probability of being able to handle the observed load is now 0.997 (to three significant figures), which may be enough to meet our commitments.
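
These figures are easy to check numerically; here is a small sketch using SciPy (my own illustration, not code from the post):

    # Sketch: probability that Poisson-distributed load stays within capacity.
    from scipy.stats import poisson

    expected_load = 50    # expected requests per second (the Poisson mean)

    for capacity in (60, 70):
        # P(observed load <= capacity) given the expected load
        p_ok = poisson.cdf(capacity, expected_load)
        print("capacity %d rps -> P(load <= capacity) = %.3f" % (capacity, p_ok))

    # This prints roughly 0.928 for 60 rps and 0.997 for 70 rps,
    # matching the figures quoted above.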

Conclusion

We have seen that probability theory and statistics can provide useful tools for capacity management. By modelling our situation using a simple probability distribution, we have gained an improved ability to quantify the risks involved in providing capacity for different levels of service. One can use this distribution to decide how much capacity to buy for any given level of demand, allowing one to use the cloud to adapt one’s infrastructure with confidence.
Unsurprisingly, there are lots of opportunities for using these tools in other areas of service management. All IT infrastructure is uncertain, and it is only by embracing this uncertainty and working with it that we can mitigate the risks involved in IT strategy, design, deployment and operation. 


Joe Geldart
Senior Engineer
Cloudreach

Friday 11 February 2011

Keep Calm and Carry On - Microsoft Windows License Activation in AWS

The AWS Cloud has now become a familiar part of our existence, but some of you may have come across a few problems that are not immediately obvious when you first start using it. One particularly funny (as in ‘strange’, not ‘haha’) problem you can encounter in Amazon is Microsoft’s Genuine Advantage Program telling you that your instance’s licence is not valid.
First impressions on seeing this are usually, “Wasn't this supposed to be managed by Amazon?” and obviously “Why are they asking me to activate my Windows license?” The fact is that Amazon actually does take care of this problem.
Namely:
  • On first start-up, the Amazon Ec2WindowsActivate service registers the copy of Windows and sets the activation server link.
  • Later, the instance can reconfirm its licence by connecting to these servers.
At the time of writing these are:
us-east-1:
us-west-1:
eu-west-1:
ap-southeast-1:

All now seems pretty logical - but then... “Why am I getting the black wallpaper, and why doesn’t Windows re-activate?” The key point is that these DNS names can only be resolved by the internal Amazon DNS. So if you change your Windows DNS servers to some others (your Active Directory ones, for example) your server won’t be able to resolve them.

So far there are two solutions:
  • You can restore the Amazon DNS servers in the machine's network configuration, or
  • You can run the 'slmgr.vbs /skms IP' and 'slmgr.vbs /ato' commands manually (or scripted) from the command line, after resolving the KMS hostname against the Amazon DNS to get the actual internal IP your instance should point to. A rough sketch of this second option follows below.
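
The sketch below is illustrative only: the resolver address and the KMS hostname are placeholders for your region's values, and it assumes the dnspython package is installed on the instance.

    # Sketch: resolve the regional KMS host via the Amazon DNS, then point the
    # Windows licensing service at that IP and re-activate.
    # AMAZON_DNS_IP and KMS_HOSTNAME are placeholders - substitute your own values.
    import subprocess
    import dns.resolver   # dnspython

    AMAZON_DNS_IP = "x.x.x.x"             # the Amazon-provided DNS server
    KMS_HOSTNAME = "kms.region.example"   # the activation server for your region

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [AMAZON_DNS_IP]
    kms_ip = resolver.query(KMS_HOSTNAME, "A")[0].address

    slmgr = r"C:\Windows\System32\slmgr.vbs"
    subprocess.check_call(["cscript", "//B", slmgr, "/skms", kms_ip])  # set KMS server
    subprocess.check_call(["cscript", "//B", slmgr, "/ato"])           # activate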

Hope that helps all you cloud users.


Emilio Garcia
Senior Engineer

Wednesday 9 February 2011

More Clouds In The Sky

Behold, the latest additions to the Cloudreach family. It's no wonder we're scouring London for a new HQ: we've got Cloud Consultants working in cupboards, some in the kitchen fridge (both shelves), and we've locked one in a drawer... just need to remember which one!

New hires since mid December:

Friday 4 February 2011

The Cloud = High Performance Computing

The cloud is a perfectly fitting platform for solving many high-performance computing problems. It may actually be cheaper, and may return results faster, than traditional clusters, for both occasional tasks and periodic use.

For a number of years, science and analytics users have been using clusters for high-performance computing in areas such as bioinformatics, climate science, financial market predictions, data mining, finite element modelling etc. Companies working with vast amounts of data, such as Google, Yahoo! and Facebook, use vast dedicated clusters to crawl, index and search websites.

Dedicated Company Clusters
Often a company will own its own dedicated cluster for high-performance computations. Utilisation will likely be below 100% most of the time, as the cluster needs to be sized for peak demand, e.g. overnight analyses. The cluster will likely become business-critical very quickly, and it may become difficult or prohibitive to schedule longer maintenance shutdowns, so the cluster may end up running on outdated software. If the cluster has grown in an ad-hoc fashion from a very small start, a critical point will be reached where any further growth requires a disruptive hardware infrastructure upgrade and a software reconfiguration or upgrade, i.e. a long shutdown. This may simply not be an option, or may carry an unacceptable risk.

Shared institutional clusters
In the case of a shared cluster (such as the UK’s HECToR), the end users will likely face availability challenges:
  • There may not be enough task slots in the job pool for “surge” needs
  • Job queues may cause a job to wait for a few days
  • Often departments will need to watch monthly cluster utilisation quotas or face temporary blacklisting from the job pool
Clusters Are Finite and Don’t Grow On Demand
Given the exponential growth of the data we process, our needs (e.g. an experiment in Next Generation Sequencing) may simply outgrow the pace at which clusters can be expanded.

The Cloud Alternative

For those who feel constrained by the above problems, Amazon Web Services offer a viable HPC alternative:
  • AWS Elastic Compute Cloud (EC2) brings on-demand instances
  • The recently (late 2010) introduced AWS Cluster Compute Instances are high-performance instances running inside a high-speed, low-latency sub-network
  • For loosely coupled, easily parallelised problems, AWS Elastic MapReduce offers Hadoop (version 0.20.2), Hive and Pig as a service, well integrated into the rest of the AWS stack, such as S3 storage (see the sketch after this list)
  • For tightly coupled problems, Message Passing Interface (MPI), OpenMP and similar technologies will benefit from the fast network
  • For analysis requiring a central, clustered database, MySQL is offered as a service called AWS Relational Database Service (RDS), with Oracle announced as coming next
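
To give a flavour of the Elastic MapReduce offering, here is a minimal sketch using the boto Python library (all bucket names, script paths and sizes below are invented for illustration):

    # Illustrative sketch: launch a small Elastic MapReduce job flow with boto.
    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    conn = EmrConnection()   # credentials picked up from the environment / boto config

    step = StreamingStep(
        name='Word count',
        mapper='s3://my-bucket/scripts/mapper.py',
        reducer='s3://my-bucket/scripts/reducer.py',
        input='s3://my-bucket/input/',
        output='s3://my-bucket/output/',
    )

    jobflow_id = conn.run_jobflow(
        name='HPC example',
        log_uri='s3://my-bucket/logs/',
        steps=[step],
        num_instances=4,      # a small cluster; scale to the problem at hand
    )
    print("Started job flow %s" % jobflow_id)
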
The Downside of the Cloud Approach: The Data Localisation Challenge (and Solutions)
The fact that a customer’s data (potentially in vast amounts) needs to get to AWS over the public Internet is a limiting factor. Often the customer’s own network may be the actual bottleneck. There are two considerations to make:
  • Many NP-complete problems are actually above the rule-of-thumb break-even point for moving data over a slow link vs. available CPU power (1 byte per 100,000 CPU cycles)
  • Often the actual “big data” are the reference datasets, which are mostly static (e.g. reference genomes in bioinformatics). AWS already hosts a number of public datasets. For others, it may make sense to send the first batch of data to AWS on a physical medium by post, and later apply only incremental changes.
Martin Kochan
Cloud Developer
Cloudreach

Thursday 27 January 2011

Cloudreach Vs Google Vs Amazon Web Services

On the track of course!

Here's what happened when the Cloudreach team and a few budding racers from Google and Amazon met on the racing circuit.

Cloudreach Technical Director and Co-Founder James Monico won everything. We all suspect he'd been having secret lessons in the run-up to it. We couldn't see him for dust... or should that be a cloud? Never mind, that was a bad joke.

The Cloudreach Annual Nora Batty Convention


There were no losers...apart from those who didn't win!

Wednesday 26 January 2011

The Cloud Vs Grid!!!

Some of the major issues with grid computing at the moment are:
  1. Its static nature: Users who require access to computational resources need to make a request to the resource providers in order to host their apps on their nodes, which then become available as a service to anyone with the right login credentials.
  2. Data transfer time: Data needs to be transferred manually to any resource that requires it. This could take a long time, further delaying the start of execution.
  3. Difficulties with fine-grained parallelism: Due to the latency involved in inter-node communications on the grid, approaches using fine-grained parallelism and communication paradigms such as MPI or OpenMP are rendered impracticable on the grid. This limits the remit of applications run on the grid.
However, Cloud Computing can be used to resolve many of these issues:
  1. Pay-as-you-go paradigm: Cloud computing allows for resource billing on a usage basis.
  2. Data transfer: As data on Infrastructure as a Service (IaaS) providers can be moved by reference, it is possible to transfer data (once on the cloud) to the node(s) that require it by reference.
  3. Fine-grained parallelism: With HPC on the public Cloud becoming a reality with offerings such as Amazon’s Cluster Compute, it is possible to purchase time on a virtualised cluster that may be used for tightly-coupled parallelised processes (for instance, parallel computing applications for bio-informatics that cannot be solved using the Map/Reduce model, such as the construction of a Bayesian network).
As we have seen above, Cloud Computing can solve several of the issues inherent in grid computing, so there is an emerging need to wrap up several of the tools that may be used for (distributed and HPC) scientific programming into an SDK, overlaid with a workflow management system. This could be provided as a Platform as a Service.

The most interesting part of such an SDK, from this perspective, would be the aforementioned workflow management system*. The workflow management system could enable the automation of the resource provisioning, requesting and obtaining the right type of resource - a cluster on the Cloud, or a number of nodes without spatial locality - for the type of process that is to be executed.
This approach could potentially be more convenient and cost-effective than using the grid because:
  • an automated approach to resource allocation would save the time consumed in resource request and provisioning in the "static" grid,
  • the Cloud can be used for processes that use either coarse- or fine-grained parallelism,
  • small commercial organisations in a variety of domains (Oil and Gas, Bioinformatics, etc) can save on the immense cost of purchasing cluster hardware and training users (who may be scientists without an informatics background) in the use of the cluster, and
  • IaaS providers like Amazon provide credits for the use of resources on the public Cloud for research in educational institutions.
* It is a moot point whether we may refer to a workflow management system as part of an SDK. ‘An IDE for the Cloud’ might perhaps be the mot juste in this case.

Tuesday 25 January 2011

Amazon SES: A Quick First Look

Amazon Web Services seem to pull out new cards from their collective sleeves almost every single day. This is of course a source of great joy at Cloudreach HQ - in addition to enabling us to provide better value-added services to our clients, the geeks among us like it because it’s yet another toy to have a play with.

Today’s toy is Amazon Simple Email Service (Beta), a highly scalable and cheap bulk email sending service. This came to us at quite an interesting time, as we had just built a Postfix email server for one of our clients to send bulk emails from, and had been battling ISPs who were suspicious of our client’s motives.

Amazon SES takes away a large amount of the complexity involved in sending bulk emails; you do not need to build your own email server to be able to send bulk emails.

As SES is still in Beta, you cannot access it from the console. You can, however, use a bunch of perl scripts they provide to get started. The rest of this blog post describes how this may be done:

  1. Sign up for Amazon SES. You can do this by visiting http://aws.amazon.com/ses
  2. You will now have access to the Amazon SES developer sandbox using the perl scripts provided. The perl scripts use the Access Key and Secret Access Key for authentication. Copy them into a file as follows, and make sure the file’s permissions are set to 600 (Linux):
    AWSAccessKeyId=xxxxxxxxxxxxxxx
    AWSSecretKey=xxxxxxxxxxxxxxxxxxx


    See the Getting Started guide from AWS for details.
  3. However, to get the perl scripts working on Ubuntu, you will need to install the following modules:
    libxml-libxml-perl
    libcrypt-ssleay-perl
    perl-doc
    libxml2-dev
  4. You will then need to verify your email address and any other email addresses you wish to send emails to (because you’re still sandboxed, you cannot send emails to email addresses you haven’t verified). The developer’s guide has details on how to do this.
  5. To send emails to all of your verified email addresses, you can use this simple Python wrapper script I’ve knocked together. You can also download it from our public Amazon S3 bucket by clicking here.
To learn how to execute this file, type python send_email.py -h.
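
If you would rather roll your own, here is a minimal sketch along the same lines using boto's SES support (the addresses and message text are placeholders, and this is not the wrapper script mentioned above):

    # Minimal sketch: verify addresses and send a message through the SES sandbox.
    import boto

    conn = boto.connect_ses()   # AWS keys picked up from the environment / boto config

    # While sandboxed, both sender and recipient addresses must be verified first.
    conn.verify_email_address('sender@example.com')
    conn.verify_email_address('recipient@example.com')

    conn.send_email(
        source='sender@example.com',
        subject='Hello from Amazon SES',
        body='A test message sent via the SES sandbox.',
        to_addresses=['recipient@example.com'],
    )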

Now that you’re done playing with the SES sandbox, you might want to request that Amazon Web Services grant you production access by filling in this form. AWS indicate that most requests will be approved within 24 hours; however, we can’t comment on that just yet because we only got our hands on this a couple of hours ago!

Siddhu Warrier
