Tuesday, October 23, 2012

Approyo Announces Hana Post Production Support

Approyo's always-available proof-of-concept and support approach encompasses the business process, end-user, application, datacenter and cloud perspectives, providing unmatched support across the entire Hana solution

PRLog (Press Release) - Oct 23, 2012 - 
NEW YORK, NY - October 23, 2012 – Approyo, the leader in SAP Hana transaction-driven proof-of-concept and support offerings, today announced a new Post Production Support (PPS) solution to serve its growing client base in the Hana arena. The new PPS offering responds to strong demand from customers across the SAP ecosystem and is key to the company's growth strategy. Clients and partners such as SAP, CSC, and Booz Allen Hamilton, to name a few, have already engaged Approyo for the solution.
Read more here...

Saturday, September 22, 2012

SAP Hana POCs


SAP Hana POC Through the Cloud
Hana Proof of Concept NOW!
The Silver Lining® Cloud for your SAP Hana Proof of Concept lets you jump right into your research without waiting for hardware or the time to build an on-site system. You can immediately deliver capabilities to numerous staff members for assessment and acclimation. We designed our Silver Lining® solution to integrate seamlessly with your environment.

Sign up and log on through the standard SAP GUI, on an iPad, or through the web. You will have full Hana Studio access to a fully configured SAP Hana cloud, letting you grasp the new Hana solution and build the knowledge needed to navigate the complexities of SAP, all without having to install, service or support the solution.

Companies using SAP are engaged in evaluating the new Hana in-memory solution.
As companies strive to justify and leverage these investments, executive management is looking for the following –
• Reliability in testing for your POC
• ROI from the investment in new tools and applications (Try it before you buy it)
• Higher levels of acceptance, utilization and production from the user and technical communities

What is needed is the option to access a fully functional SAP Hana system where SAP users can construct Proof of Concept and Conference Room Pilot environments to experiment with this new SAP functionality – with full control and usability, and no disruption to your production environment!  Approyo has invested over two years working hand in hand with SAP experts to create a reliable, fully functional SAP Hana cloud solution that addresses this need.

This offering will enable any SAP client to access their own fully functional SAP Hana environment for Proof of Concept and experimentation purposes.

Your “cloud” is highly secure, accessed and used only by your organization, and requires NO internal resources (systems or technical staff) to utilize, apart from a Hana Studio installation.

This solution is intended for non-production use as an alternative to buying and supporting incremental servers: we handle the hosting of your Hana cloud instance and can have it provisioned in 17 minutes! As confirmed by select trial users of this offering, the Silver Lining SAP Cloud Solution adds tremendous value throughout the SAP POC process and is ideal for rapid, low-cost access to a fully functional Hana environment.

Contact Approyo to confirm your interest in deploying our Silver Lining SAP Hana cloud solution. We will discuss your interest and requirements for using the solution and provision your own SAP clouds. A Hana POC term is typically 90 days.

Companies today are adopting Hana to drive higher value to their users while stretching increasingly scarce IT dollars. Approyo is committed to assisting these companies in pursuit of this goal and strives to provide meaningful solutions that drive value into the SAP landscape.

Thursday, June 28, 2012

How we built a Customisable Cloud Hadoop using MapR and Brooklyn on EC2


In our recent blog post, Customisable Cloud Hadoop: Automating MapR on EC2 using Brooklyn, we demonstrated how Brooklyn and MapR can be used to create a powerful and flexible cloud Hadoop cluster.
In this post we reflect on how we went about writing it, including some of the issues encountered. This post will be particularly useful if you are working through the source code.


Read more here...

Friday, June 15, 2012

Cloudsoft Expands its Advisory Board with the Appointment of Major General David Shaw CBE


Former Director of Army Media and Communications to Drill Cloud Computing Start-up in Institutional and Government Markets

EDINBURGH, June 14, 2012 – Cloudsoft Corporation, a software innovator specialising in multi-cloud application management, today announced they have appointed Major General David Shaw CBE to their strategic advisory board. Shaw joins the board to help shape Cloudsoft’s approach to government and institutional markets, and to provide guidance in media and communication.

Tuesday, June 5, 2012

How Enterprises Can Bend the Programmer Learning Curve

http://sandhill.com/article/how-enterprises-can-bend-the-programmer-learning-curve/

Tuesday, May 29, 2012

Clearing the Big Data Hurdle: The Open Source Advantage


By Christopher M. Carter, Cloudsoft Corporation
www.cloudsoftcorp.com
In today’s world there is a new understanding, the emergence of a new “reality” much different from what we had even a decade ago.  The importance of the big data that exists within today’s enterprises cannot be overstated.  Big data is becoming more important in every industry, but nowhere more so than in finance, from enterprise finance departments to the big Wall Street firms.  Most businesses aren't ready to manage this flood of data, much less do anything interesting with it.

Big data will impact every industry, from finance to education and government.  In fact, the Federal government just announced a new big data research initiative, with a budget of $200 million.
Data as a whole is a catalyst for business.  According to IDC, there will be 2.7 zettabytes of data created this year alone.  Look inside the enterprise and you begin to see that, in order to analyse and derive value from these increasingly large data sets, organisations need to embrace the right tools for these new capabilities.  As businesses begin to better understand their existing data, they can gain competitive advantage in the process; however, that advantage can only be realised if data is processed intelligently and efficiently, and results are delivered in a timely manner.
How does the enterprise begin to mine its data?  Good question.  With so much data that firms can become overwhelmed, how can the good data be identified?   What is “needed” data, and what information is less valuable?   The old mantra of “good data in, good data out; bad data in, bad data out” is a starting point for answering these questions.   All firms need to be cognizant, first and foremost, of the quality of the data being entered into their systems and used in daily operations. This is especially important in industries like finance, where data is the lifeblood of the business.
Opportunities abound in big data, and an organisation can get as much knowledge out of its stored data as it puts energy into analysing it.  With applications ranging from SAP BusinessObjects and Hana's in-memory data processing to newer tools, finance-sector firms are looking to add new positions like a Chief Data Officer specifically to own the key decisions around information that need to be made today.   Big data is indeed big, but it's not for all purposes.  For example, it’s not for transactional or real-time in-memory processing of small, endless streams of structured data.  Think of data tools like a big truck vs. a small sedan - each has its purpose.  Both big data platforms and fast in-memory databases have a place in driving business.
Opportunities to harness and utilise big data become more feasible when open source frameworks come into play.  The open source world has essentially created the new age of big data analytics, from Hadoop, the most widely used and well-known solution for developers, to products like Greenplum from EMC and others.  These tools have created a rush to market among organisations trying to process as much data as quickly as they can and make decisions in as near real time as they can.  For example, a major retailer with outlets around the globe, utilising an open source framework, can harness the data coming in from its social media sites and run it through an enterprise data analytics solution spanning literally thousands of nodes, making real-time decisions in its stores about products and pricing.
Three to five years ago this was not possible.  But with the large and active open source community working on the framework, this computational ability now exists and is being utilised and extended by new companies daily.
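To make the retailer example concrete, here is a minimal sketch of the kind of Hadoop MapReduce job such a pipeline might start from: counting product mentions in a feed of social media records. The input layout, field names and class names are illustrative assumptions rather than any particular retailer's system, and the sketch assumes a recent Hadoop release with the newer MapReduce API.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MentionCount {

    // Mapper: emits (productId, 1) for every social media record mentioning a product.
    public static class MentionMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text product = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed input layout: one record per line, "timestamp,productId,message".
            String[] fields = value.toString().split(",", 3);
            if (fields.length == 3) {
                product.set(fields[1]);
                context.write(product, ONE);
            }
        }
    }

    // Reducer: sums the per-product counts produced across thousands of mappers.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "product mention count");
        job.setJarByClass(MentionCount.class);
        job.setMapperClass(MentionMapper.class);
        job.setCombinerClass(SumReducer.class); // pre-aggregate on each node
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. raw feed in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // per-product totals
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The same job runs unchanged whether the cluster has ten nodes or thousands; scaling the analysis is a matter of adding hardware, which is precisely the property the open source frameworks have made affordable.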
Corporations are looking at their data as an asset within their walls no matter where it physically resides, but yet there is still so much to learn and to dig through.  The new Chief Data Officer and their team must stay vigilant and be concerned about many factors that will directly impact the business, including how and what data is being provided to regulators.  Enterprises need to set standards when it comes to their information, and this is more important than ever in the increasingly regulatory-focused landscape.  Firms need to insure their internal processes are in place for current government regulatory requirements, as well as taking into account regulations in many of the new laws that are being created, seemingly on the fly.
There is no doubt that bringing the power of big data and harnessing its performance is important and that it will become more strategic when considering how organisations will use the data to interact with their clients, competitors and the market through faster decision-making.  Some companies will start to shrink under the pressure of this new data analysis, while some may indeed fail completely.  But regardless of which companies falter, and which ones gain market share, one thing is for certain: database companies should see tremendous gains as the need for more and more database applications increases.
Organisations are looking to the future and deciding how important a role big data will play in the coming years.  The truth is, how firms utilize big data as a source of knowledge and power will be the largest influence.  These enterprises that find success with adopting open source tools to analyse their information will see improved profitability, provide stronger service throughout the organisation and to their customers and rise above in the land of giants.

http://www.bigdataforfinance.com/bigdata/2012/05/clearing-the-big-data-hurdle-the-open-source-advantage.html

Thursday, May 10, 2012

Cloudsoft Application Management Platform Supports HP Cloud Services


Open Source Application Management Available in New Public Cloud Environment

NEW YORK and EDINBURGH, May 10, 2012 – Cloudsoft Corporation, a software innovator in multi-cloud application management, today announced their Application Management Platform (AMP) support for HP Cloud Services, HP’s new public cloud service.
As part of an agreement between Cloudsoft and HP, customers can use Cloudsoft’s open source Application Management Platform to add portability and runtime agility to applications running on HP Cloud Services (http://hpcloud.com), either as a standalone provider or as part of a hybrid cloud strategy. Customers can expect their cloud applications to operate in accordance with demanding requirements and service levels – without needing the skills usually required to run sophisticated applications in a private or hybrid cloud environment.

“Our Application Management Platform allows enterprises to use cloud, without losing control,” said Duncan Johnston-Watt, CEO and Founder of Cloudsoft. “Hats off to HP for creating an open and transparent, business-grade cloud while maximising the role that projects like OpenStack™, brooklyn and jclouds can play. HP’s open source ethos and the depth and breadth of its ecosystem will help drive adoption of cloud by enterprise users.”
Cloudsoft’s Application Management Platform supplies a control plane that takes full advantage of the HP Cloud Services open architecture and OpenStack API to rapidly compose cloud services, combining well-defined, easily articulated policies and an enterprise-grade cloud with fully autonomic, policy-driven application management.
Unlike alternatives that focus on managing the underlying infrastructure, Cloudsoft’s Application Management Platform operates with an application-centric focus, using policies and an understanding of what the application requires to drive infrastructure to exactly meet those needs – dynamically and in real time. This includes enforcing jurisdictional constraints as well as ensuring compliance with business and service policies, for example SLAs, cost controls, QoS, application priorities and preferred service providers.
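For readers who want a feel for this application-centric, describe-what-you-need style, here is a minimal jclouds sketch that provisions capacity on an OpenStack-based cloud by stating what the application requires rather than naming specific machines. It illustrates the general pattern only, not AMP's internal code; the provider id, group name and credentials are placeholders.

import java.util.Set;
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.RunNodesException;
import org.jclouds.compute.domain.NodeMetadata;
import org.jclouds.compute.domain.Template;

public class DescribeWhatYouNeed {
    public static void main(String[] args) throws RunNodesException {
        // Build a compute context against an OpenStack-based provider;
        // "hpcloud-compute" is the assumed jclouds provider id for HP Cloud.
        ComputeServiceContext context = ContextBuilder.newBuilder("hpcloud-compute")
                .credentials("tenant:accessKey", "secretKey") // placeholders
                .buildView(ComputeServiceContext.class);
        try {
            ComputeService compute = context.getComputeService();

            // State the application's needs; jclouds matches them to hardware.
            Template template = compute.templateBuilder()
                    .minRam(2048)  // at least 2 GB of RAM
                    .minCores(2)   // at least 2 cores
                    .build();

            // Provision two nodes in a named group, wherever they best fit.
            Set<? extends NodeMetadata> nodes =
                    compute.createNodesInGroup("amp-demo", 2, template);
            for (NodeMetadata node : nodes) {
                System.out.println("Provisioned " + node.getId()
                        + " at " + node.getPublicAddresses());
            }
        } finally {
            context.close();
        }
    }
}

A control plane such as AMP layers policy on top of this kind of call: the same description of what the application needs is re-evaluated continuously, so the infrastructure follows the application rather than the other way around.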

About Cloudsoft Corporation
Managing business in the cloud – Cloudsoft Corporation is a software company specialising in multi-cloud application management, enabling enterprises to better exploit the benefits of cloud computing: using cloud without losing control.
Cloudsoft’s Application Management Platform (AMP) enables enterprises to develop, deploy and manage large-scale distributed applications across multiple clouds, with lower cost, reduced complexity and minimal risk, while avoiding vendor lock-in.
Headquartered in the UK and venture funded, Cloudsoft’s seasoned executive team leverages an active presence in key open source communities and is backed by a world-class advisory board. Cloudsoft provides comprehensive professional open source support and services, and actively sponsors a number of open source projects.

Monday, April 2, 2012

A special Cloud Birthday gift to you


To celebrate our third birthday we are open sourcing our policy-based control plane, Brooklyn, under the Apache 2.0 license.

You can read all about it in our press release -


We already provide extensive professional open source support and services, actively sponsoring open source projects such as Apache Whirr and jclouds, so open sourcing Brooklyn is a logical next step for us.

For the community, this means we can now offer a comprehensive open source Application Platform Toolkit (APT) comprising Brooklyn, jclouds and Whirr that makes managing applications, especially in a multi-cloud environment, a breeze.
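If you want a taste before diving in, here is a minimal sketch of a Brooklyn blueprint in Java: an application containing a load-balanced web cluster, deployed to a jclouds-supported cloud. Class names and config keys follow the public Brooklyn examples as the API later stabilised (the packages shown are the later Apache ones); treat them, along with the WAR URL and the location spec, as assumptions to check against the current source.

import org.apache.brooklyn.api.entity.EntitySpec;
import org.apache.brooklyn.entity.stock.BasicApplication;
import org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster;
import org.apache.brooklyn.entity.webapp.JavaWebAppService;
import org.apache.brooklyn.launcher.BrooklynLauncher;

public class WebClusterDemo {
    public static void main(String[] args) {
        // Blueprint: an application containing a load-balanced web cluster.
        // Brooklyn wires the load balancer and app servers together and
        // manages them as a single entity.
        EntitySpec<BasicApplication> app = EntitySpec.create(BasicApplication.class)
                .child(EntitySpec.create(ControlledDynamicWebAppCluster.class)
                        .configure("cluster.initial.size", 2)
                        .configure(JavaWebAppService.ROOT_WAR,
                                "http://example.com/my-app.war")); // placeholder WAR

        // Start the Brooklyn management plane and deploy the blueprint to a
        // jclouds-backed location; "aws-ec2:us-east-1" is a placeholder spec.
        BrooklynLauncher.newInstance()
                .application(app)
                .location("aws-ec2:us-east-1")
                .start();
    }
}

Because the location is just a spec handed to jclouds, the same blueprint can be pointed at a different cloud, or several, without changing the entity definitions; that portability is what the APT is all about.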

William Fellows, VP Research at the 451 Group, captures what we are trying to achieve in a provocative report published over the weekend, “PaaSification – use Cloudsoft's Brooklyn to create your own Force.com?”

You can obtain a copy -


However, while this is useful background, what we'd really like you to do is join the Brooklyn open source community.

Therefore your call to action is to -

  1. Sign up to our google groups brooklyn-users and brooklyn-dev -
  2. Visit Brooklyn's new home -
  3. Join the party -
    • IRC: #brooklyncentral
    • Twitter: @brooklyncentral #brooklyncentral

Your support is essential as Brooklyn won't be the same without you :-)

Friday, February 3, 2012

Cloud Myth #5: Clouds Require Virtualization


Everywhere I look, I see clouds and virtualization mentioned together. They seem to be the peanut butter and jelly of the technology world. Certainly, clouds and virtualization taste good together, but surely we can separate them, right? Can you build a cloud without virtualization? Does peanut butter taste good without jelly? The short answer is, “You betcha,” but let’s examine why that’s true.
(I should probably note that I covered some of these ideas last year in my post titled “Internal Cloud vs. Virtualization: What’s the Diff?” You would do well to go back and read that material as well.)
The most important thing to keep in mind is that virtualization is primarily a description of technology. In particular, hypervisors are small software shims that slip between the physical machine hardware and a guest operating system and give the guest the illusion that it is running directly on the hardware. The hypervisor can create this illusion for multiple guests at the same time, allowing multiple virtual machines to share the same physical hardware. Many IaaS clouds use virtualization. Certainly all the major IaaS public clouds — AWS, Terremark, Rackspace, Fujitsu, Savvis, etc. — use it, as do most IaaS private clouds.
So, doesn’t that suggest that IaaS clouds and virtualization are inseparable? The answer is no, in the same way that finding a bunch of peanut butter and jelly sandwiches in an elementary school cafeteria at noon doesn’t imply that peanut butter isn’t tasty by itself (Thai chicken satay, anybody?).
The fundamental difference is that while “virtualization” describes a technology used to allow different virtual instances to share a common piece of hardware, “cloud” really describes an operating model for IT. So the next logical question is, “What’s an operating model?”
An operating model describes how an enterprise functions across process, operations, and technology domains in order to deliver business value. How do those things interact to create a desirable outcome? A cloud operating model is one focused on using clouds, as-a-service delivery, and agile IT to enable efficient IT utilization as the foundation for increased business value. At ServiceMesh we sometimes call this an “agile IT operating model” to indicate that it’s really about more than just clouds, but that’s another post altogether. Whether we call it a cloud operating model or an agile IT operating model, it’s important to note that an operating model can utilize a lot of different underlying technology for its implementation — it’s a completely generic notion at the top level.
Users of a cloud operating model access resources through self-service interfaces. They request resources with certain characteristics (amount of CPU, RAM, and disk space, for instance), and they receive access some time later (hopefully in less than a minute or two). But when they access those resources they don’t have any expectation of which exact physical machine they are using. From a user point of view, there is no way to tell (well, almost no way) whether a given “server” they have requested is backed by a virtual machine or a physical machine. For all the user knows, the server might be a physical server in a huge server farm or it might be an “equivalent” virtual machine running alongside others on a larger physical machine. At ServiceMesh, we call a cloud that uses physical systems without hypervisors a “bare metal cloud.”
“But, but…” I hear you cry, “Without a hypervisor don’t you lose a lot of manageability?” The answer is yes. But that’s not an argument for why clouds and virtualization are inseparable; it merely observes that some cloud implementations are more manageable than others. Virtualization brings many benefits, such as the ability to transparently shift workloads to different hardware within the cloud for load balancing and disaster recovery (think of the live migration technologies supported by the various hypervisors).
But bare metal clouds also have interesting properties such as increased performance. If you want every CPU cycle your cloud can deliver (think large data analytics workloads), you might want to ditch the hypervisor and run your application directly on the metal. You can still manage your physical servers as a pooled resource within a cloud operating model, offering self-service access with accounting and charge-back of resource usage.
Interestingly, if you use an advanced cloud platform like Agility Platform, you can have multiple clouds of each type, virtualized or non-virtualized, running side-by-side, and move applications and workloads from one to another. You might want to do this, for instance, to allow software development to occur in a lower-cost cloud using virtualization. That saves cost when you don’t really need performance. Then, when development and testing are complete, you can move the final workloads into production on a bare metal cloud. The only difference in user experience is the performance and cost.
So, while the cloud and virtualization combo is certainly as popular as peanut butter and jelly, don’t mistake popularity for inseparability. “Virtualization” describes one cloud implementation technology choice. “Cloud” describes an overall operating model. For many, bare metal clouds that eschew performance-stealing hypervisors are an interesting choice for some percentage of the cloud landscape.
Are you using or interested in bare metal clouds or do you see no value in them whatsoever? Weigh in with a comment and let us know.

Tuesday, January 3, 2012

Don't Let Next Year PaaS You By – Enterprise Cloud Trends for 2012


What do Virtualization and Cloud executives think about 2012? Find out in this VMblog.com series exclusive.
Contributed Article by Derick Townsend, VP of Product Marketing, ServiceMesh
In 2011, we saw several enterprises launch their initial cloud deployments. Next year, these will evolve from mostly departmental efforts to new cloud initiatives that deliver broader enterprise business value. Here are five enterprise cloud trends that will help drive this in 2012:
1. Enterprises evaluate and adopt Private PaaS.
Enterprise IT has shown great interest in the benefits offered by Platform-as-a-Service (PaaS) – faster application development, standardized platforms at lower cost, scalability, embedded security, etc. But today's public PaaS offerings also come with serious downsides, including vendor lock-in. If you develop applications in Salesforce.com, Google App Engine or a variety of others, you can't easily migrate them to another platform.
As a result, the momentum behind "private PaaS" offerings is building as we approach 2012. With private PaaS, enterprises can assemble their own PaaS offerings with a cloud management platform and customize them to specific requirements or their preferred combination of middleware, testing tools, and utilities. Private PaaS offerings are often initially deployed in a private cloud, and can take a variety of forms, from classic application development platforms to more specialized ones such as Hadoop, SAP, or custom versions of Cloud Foundry. Done right, a private PaaS can deliver the best of both worlds: agility and scalability benefits similar to a public PaaS, while leveraging the existing tools and training used in the enterprise today.
2. Enterprises figure out that cloud governance must extend all the way out to business units.
In 2012, enterprises will continue extending self-service access to more IT resources and expand the cloud's reach out to business units. In the process, they'll discover that current cloud "policies" are too limited and need to go beyond defining role-based access or simple thresholds for scaling up resources. Instead, comprehensive cloud governance policies must also deal with the regulatory, compliance, security, and performance issues that drive the business units.
As a result, IT will adopt policy-driven governance models and solutions to enforce governance consistently across the organization. This includes both management tools and organizational changes, such as federating governance and policy responsibilities across groups within the enterprise. For example, the corporate compliance officer needs to codify and enforce regulatory constraints using one set of policies. IT needs to enforce another set of policies related to managing resources and monitoring deployment environments. Finally, business units need to enforce application-level policies that define required performance levels and SLAs. All of these policies and their interactions need to be intelligently enforced across the enterprise's cloud-based services, as the sketch below illustrates.
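To illustrate the idea (purely hypothetically; this is not ServiceMesh's API, just a sketch of federated policy sets), consider a deployment request that is approved only if the compliance, IT, and business-unit policies all permit it:

import java.util.List;
import java.util.Map;

public class GovernanceCheck {

    // Each stakeholder contributes its own policy set; all must agree.
    interface Policy {
        boolean permits(Map<String, String> deployment);
    }

    // Compliance officer: data may only land in approved jurisdictions.
    static final class JurisdictionPolicy implements Policy {
        public boolean permits(Map<String, String> d) {
            return "EU".equals(d.get("region")) || "US".equals(d.get("region"));
        }
    }

    // IT operations: cap resource consumption per deployment.
    static final class CapacityPolicy implements Policy {
        public boolean permits(Map<String, String> d) {
            return Integer.parseInt(d.getOrDefault("vcpus", "0")) <= 64;
        }
    }

    // Business unit: only service tiers that can meet the application's SLA.
    static final class SlaPolicy implements Policy {
        public boolean permits(Map<String, String> d) {
            return "gold".equals(d.get("tier")) || "silver".equals(d.get("tier"));
        }
    }

    public static void main(String[] args) {
        List<Policy> policies =
                List.of(new JurisdictionPolicy(), new CapacityPolicy(), new SlaPolicy());
        Map<String, String> request =
                Map.of("region", "EU", "vcpus", "16", "tier", "gold");

        // Federated governance: every policy set must permit the request.
        boolean approved = policies.stream().allMatch(p -> p.permits(request));
        System.out.println(approved ? "approved" : "rejected");
    }
}

The point is not the toy checks but the structure: each group owns and evolves its own policies, while the platform enforces their intersection on every deployment.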
Next year, more IT organizations will recognize the full scope and importance of cloud governance. Nearer term, IT will see the benefits of enabling more end-to-end automation. Longer term, the realization will sink in that governance is a key driver to ensure that stakeholders derive the cost and agility benefits they expect out of the cloud – without risk of undercutting their core missions.
3. Enterprise cloud ecosystems will grow fast...and experience growing pains.
As cloud projects expand from departmental efforts to broader enterprise deployments, teams will have to integrate a much more complex ecosystem of infrastructure, legacy systems, third-party applications, custom tools/utilities, etc. Although many of today's cloud vendors claim to have an end-to-end solution, the reality is that, far from working out of the box, many of these solutions require significant customization and integration. This may have been acceptable for limited, proof-of-concept scenarios, but these solutions will hit a wall when it comes to integrating into large enterprise deployments.
For example, providing security typically requires integrating solutions for federated identity management, encryption key stores, disk encryption, firewall products, virtual networks, DHCP, anti-virus, HIDS...the list goes on. Now expand this list to include an ecosystem with multiple cloud providers, accounting/chargeback, performance monitoring, and more.
In 2012, enterprises will grow frustrated by vendors that require seemingly endless integration and customization projects that start to undercut the expected benefits for their cloud project. Successful cloud vendors will be the ones with full-featured APIs and support for a broad range of legacy applications and tools out of the box, allowing enterprises to leverage their existing investment and dramatically decreasing the time it takes to roll out a new solution.
4. Back to the drawing board for organizations that adopted a narrow, tactical view of cloud.
Enterprises that have looked to the cloud simply to provide infrastructure automation will find their results disappointing. Many times, these enterprises have made limited attempts to change their IT Ops organization and processes even though cloud operating models demand fundamental shifts in mindsets, skills and tools. This approach may incrementally improve some process steps, but it doesn't address the bigger picture agility benefits of extending a broad portfolio of self-service IT resources to end users within a fully governed environment.
The reality is that many existing operational processes and legacy tools don't meet the needs of the self-service, on-demand world of the cloud. A key problem is that traditional ops processes are vertically siloed around domains, rather than spanning more horizontally across the software lifecycle. In 2012, these IT organizations will go back to the drawing board and rethink their cloud strategy, including new technologies and organizational changes. A variety of options await them, including the possibility of creating new horizontal overlay groups or new Operations Teams that incorporate DevOps and automated configuration/environment management.
5. Application migration to the cloud gets serious... and complex.
To date, most cloud projects have encompassed new greenfield applications or existing apps that are easy to migrate, like websites, blog engines, wikis, and simple departmental apps. The advantage is that these workloads are not mission-critical and can survive some hiccups. Now that these efforts have proven successful, companies will pursue the heavy lifting needed to move more business-critical apps to the cloud.
In 2012, companies will begin developing Reference Application Architectures that set the precedent for how to migrate and optimize more complex applications to the cloud. These application architectures will define how to ensure high availability, security, disaster recovery and other enterprise-level application requirements. Much of this can be achieved in the cloud more efficiently and cost-effectively; however, each organization will need to go through its own learning pathway to make it happen. The use of published reference guides and consultants will help, but ultimately each organization will need to invest its own significant, hands-on effort.
Here's to a promising and exciting 2012 for enterprise cloud adoption. I'm already looking forward to future success stories.
###
ServiceMesh's Agility Platform enterprise cloud platform enables Global 2000 customers to implement cloud-based “everything-as-a-service” IT delivery models that span internal and external IaaS, PaaS, and SaaS providers, delivering quantum improvements in business agility and operating costs.