Saturday 15 June 2013

Dark clouds on the horizon? A fatal flaw in the cloud services ideology

We have all heard of the cloud, right? That fantastic place where you have no server worries, everything remains up, backups are reliable and networks never fail. The cloud is instantly scalable and is cheaper than having your own servers. You get 24-hour, non-stop support and monitoring with automatic fault correction.

The magic of the cloud is made possible by economies of scale: In the "old" world, when everyone has their own server(s), companies need to plan capacity for the maximum possible usage, the peaks. This means that on average, the servers are underutilised. The same is true for the network connections and the rest of the service infrastructure including the personnel required to keep it all working.

As different systems have different peak times (performance profiles), the greater the number of systems, the more the peak loads are spread. Let's imagine a system (A) that peaks at 4 "CPU requirements", and another system (B) that also peaks at 4 "CPU requirements". In the "old" world, these two systems would each have their own capacity, i.e. 8 CPUs in total. However, if the two systems peak at different times, and are otherwise idle, then 4 CPUs are adequate to run both, provided that the CPU capacity can be supplied on demand.

By sharing the resources on demand, we have therefore saved "4 CPUs" of capacity. If the costs are shared between the two systems, then the cost to each system is just 2 CPUs (4 CPUs / 2 systems), yet each still has 4 CPUs of capacity available at its peak. In other words, each system gets its infrastructure at 50% of the cost.
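
To make the arithmetic concrete, here is a minimal sketch (purely illustrative; the load profiles are invented):

  // Two hypothetical load profiles, in "CPUs", across four time slots.
  public class PeakSharing {
      public static void main(String[] args) {
          int[] loadA = {4, 0, 0, 0};   // system A peaks in the first slot
          int[] loadB = {0, 0, 4, 0};   // system B peaks in the third slot

          int dedicated = 4 + 4;        // "old" world: provision each peak separately = 8 CPUs

          int shared = 0;               // shared world: provision for the combined peak
          for (int t = 0; t < loadA.length; t++) {
              shared = Math.max(shared, loadA[t] + loadB[t]);
          }

          System.out.println("Dedicated: " + dedicated + " CPUs, shared: " + shared + " CPUs");
      }
  }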

Different systems peak at different times, especially when you consider a global capacity that spans the world's time zones. Scale this up to tens of thousands of systems (or more) and you have very real savings with an even load (one that typically follows the sun).

This is the fundamental business model of the cloud.

Cloud applications
The logical extension to providing infrastructure is providing the reason that infrastructure is needed in the first place: to run software. What's true for infrastructure applies even more so to software. Software costs money. It is expensive to build, it requires expensive IT infrastructure to run and it is expensive to maintain.

Software maintenance accounts for approximately 80% of the cost of software across its lifetime. From bug fixing to feature enhancements, software maintenance is an ongoing, costly business.

Just as with cloud IT infrastructure, spreading the cost of software across a large user base creates economies of scale: the same software can be used by many independent users, as opposed to each user developing their own copy.

It is therefore not surprising that, with the very real cost benefits of software as a service (cloud applications), this is a fast growing sector of the IT landscape. Users simply get to use the software, exactly as they would a utility such as electricity. It automatically scales to requirements, and issues such as backups and uptime are taken care of.

Dark lining
There is, however, a fatal flaw in the cloud paradigm. Let's imagine a system that provides sales information. This system makes data available to other systems, such as accounting, invoicing and management reporting (MI).

Changes to the Sales system therefore need to be carefully planned. It is unlikely that an upgrade to the Sales system would be permitted, for example, during year end preparation, as an error in the feed to the accounting system could have a severe business impact on the year end activity.

IT departments typically need to plan changes to integrated systems carefully, taking into consideration not only the changes themselves but also their impact on the wider ecosystem. Scheduling upgrades is as much a business decision as it is a technology decision.

Grey applications
Most IT services are integrated. Typically this is via a published API or by the provision of feeds to other systems. This integration is usually multi-level, with systems integrated with the source system themselves integrating into a wider ecosystem and so on. In our example, the invoicing system might itself provide feeds to an invoice printing service and also a VAT accounting system. The VAT system could, in turn, interface to the public service provided by the UK government's HMRC department, a third party, independent system that is itself part of a very large IT ecosystem and integral to the entire British economy.

These direct integrations are typically well defined and documented within IT departments.

However, a data warehouse that pulls data from the Sales system, the invoicing system and the MI system described in our example could expose data internally to the company via an API. For example, an accounts team may have spreadsheets that directly access the data warehouse and produce cash flow analysis. These spreadsheets could be freely propagated amongst the accounting department and, provided that the users pass authentication and security checks, they will have access to the data. Moreover, individual users may create further spreadsheets accessing the data warehouse information. These additional spreadsheets are ad hoc in nature; they are not part of the core IT service but are valuable tools to other parts of the business.

These spreadsheets are an example of grey applications: applications that are legitimate, authorised uses of the IT infrastructure's services, but are third party to that infrastructure.

Grey applications are a significant class of software and IT usage in most organisations.

Now consider the implications of the Sales system being updated such that sales totals are no longer inclusive of VAT. Whilst this change would allow the MI reporting system (which is part of the core infrastructure and hence known to the IT department) to be upgraded so that it continues to produce its reports correctly, the impact on grey applications is unknown.

Amplification
Grey applications can themselves be data sources for other systems, which in turn can act as sources for other ecosystems. This forms a tree of dependencies; in other words, the number of dependent systems can grow exponentially.

This situation is caused by making services and data available by public APIs, the very strength of the cloud. In fact, cloud services are designed to be "plugged" into other services and software to enable users to build complex IT solutions at commodity, off-the-shelf prices. Many successful cloud applications today are specifically designed to be the foundations upon which an entire IT infrastructure can be built.

In other words, they are built specifically to be upstream systems: they enable direct integrations as well as grey applications. In fact, it can be argued that in the cloud, ALL downstream services are grey applications.

Now consider upgrades to cloud applications.

Whereas previously the IT department worked with the business to schedule changes to key systems at times of reduced risk to the business (our year end example) and planned and scoped changes with the entire ecosystem in mind (our connected systems example), with cloud services and a vast, shared online audience, this is simply not possible.

By their very nature, cloud systems cannot behave as internal IT departments do: they cannot liaise and work in concert with their end users and the associated business users to plan, schedule and participate in upgrades or changes.

Risk mitigation strategies are not just unavailable, they are simply not a part of the solution.

This means that the impact of any fault with upgrades, or any unintended consequence of changes (e.g. sales figures no longer including VAT), propagates downstream to an exponentially increasing number of systems within a few short levels of the connection tree. In other words, unintended consequences and faults are amplified by the downstream connected ecosystem.

Moreover, these changes can occur at business-critical times, maximising the impact on businesses should things go wrong. In fact, given the global reach of cloud solutions, it is almost guaranteed that the changes will fall during business-critical times (times that would never have been chosen by an internal IT department) for some proportion of clients.

I call this mechanism amplification because it multiplies the scale of the impact of changes.

A previous global example
The global financial crisis that began in 2007 with a credit crunch was the result of debtors in a few states in America having repayment difficulties. The connected nature of the world's financial system meant that over a two year period, what began as isolated incidents in the USA compounded into a global crisis that threatened the existence of the Eurozone and affected every nation on the planet.

In the cloud, time is measured in microseconds, not months or years.

Feedback into the loop
Problems that propagate in the highly interconnected world of the cloud can return back into the system that produced them. The connections online do not form a straight line from a source system through a series of intermediate systems to a set of terminating end systems. There are many interconnections along the way. This means that a fault that propagates from a source to downstream systems can at some point also receive input from a system downstream of one of its own downstream systems. I call this feedback.

As cloud software adoption continues, the interconnections increase exponentially and feedback becomes more likely. This means that minor faults, through feedback, can evolve into major faults.

The combination of feedback and amplification means that faults are not only multiplied but also become more severe.

Development slowdown
As cloud services gain users, upgrades become more risky, both in terms of the direct risk of failure and in terms of unintended consequences. Increasingly, online software vendors will need to be aware of, and plan for, grey applications, amplification, feedback and unintended consequences.

If providers are not to risk disaster for their clients, they will need to take steps to protect against faults and unintended consequences. That means, for example, that instead of simply changing an API (e.g. version 1 -> version 2), the provider will run both versions in parallel, maintaining and supporting both. The same principle will have to apply to key services, data schema changes and other normal changes to software over its lifetime.
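
As a rough sketch of what running parallel API versions might look like (the endpoint paths, figures and VAT handling below are purely illustrative assumptions, not taken from any real provider):

  import java.io.IOException;
  import javax.servlet.http.HttpServlet;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  // Hypothetical sales-totals endpoint that keeps the old v1 contract
  // (totals include VAT) alive alongside the new v2 contract (totals exclude VAT).
  public class SalesTotalsServlet extends HttpServlet {
      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
          double net = 100.0;        // placeholder figure
          double vat = net * 0.20;   // assumed VAT rate, for illustration only
          resp.setContentType("application/json");
          if (req.getRequestURI().startsWith("/api/v1/")) {
              // v1 clients continue to receive VAT-inclusive totals
              resp.getWriter().write("{\"salesTotal\": " + (net + vat) + "}");
          } else {
              // v2 clients receive net totals with VAT reported separately
              resp.getWriter().write("{\"salesTotal\": " + net + ", \"vat\": " + vat + "}");
          }
      }
  }

Both code paths then have to be maintained, documented and tested for as long as any downstream (or grey) application still depends on v1.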

This means that software maintenance becomes more complicated and requires more resources; changes become bigger projects to implement and testing requirements increase.

Conclusion
It may appear from this posting that cloud services are a bad idea, or even a national or global threat. However, cloud services are enablers, commoditising IT services and allowing users who previously simply did not have the infrastructure, or could not afford it, to benefit from advanced, strategically important, heavyweight IT solutions.

The cloud is also highly competitive, forcing solution providers to constantly innovate. Clients benefit from that innovation spend without having to make it themselves.

The principles of amplification and feedback don't only apply to faults and the negative effects of changes; these same mechanisms serve to multiply the benefits of improvements and progress, delivering great return on investment and providing ever-increasing leverage.

Cloud services are here to stay and will continue to grow both in capability and in strategic importance. They are changing, and will continue to change, every aspect of our lives, both private and corporate.

We therefore need to add the concepts of grey applications, feedback and amplification to our vocabulary, be aware of them, propagate these ideas and plan for the events and implications that they warn of.

With suitable planning, awareness, inclusion in education and possible regulation, we will be able to use these principles to leverage cloud advantages while defending against the potentially catastrophic consequences of blindly stumbling down the cloud path unaware of this new IT reality that we have created.

The cloud has changed the computing landscape. We need to ensure that we have a suitable understanding of this new land.

Monday 15 March 2010

Endless loop in smartGWT listgrid retrieving data from REST back end

So you have written your DataSource, instantiated your ListGrid and tested the calls to your REST back end (e.g. a Rails controller) in your browser. You know that the back end is delivering correct JSON, and yet your smartGWT grid loops endlessly, loading the data.

The console shows no errors but loads of RPC calls.

Check the record XPath in your DataSource. The value given in the call to setRecordXPath must precisely match the JSON data path. If you get this wrong, smartGWT fails silently, looping forever attempting to fetch the records that it knows from the metadata should be there!
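
For example, a minimal sketch (the JSON shape, URL and field names here are assumptions, not from any particular back end):

  // Assumed JSON returned by the back end:
  //   { "response": { "status": 0, "data": [ { "id": 1, "name": "Widget" }, ... ] } }
  RestDataSource salesDS = new RestDataSource();
  salesDS.setDataFormat(DSDataFormat.JSON);
  salesDS.setDataURL("/sales.json");          // hypothetical endpoint
  salesDS.setRecordXPath("/response/data");   // must point exactly at the array of records
  salesDS.setFields(new DataSourceIntegerField("id"),
                    new DataSourceTextField("name"));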

Sunday 28 February 2010

Displaying lookups from other tables in ListGrid

Here's a problem in smartGWT: You have a ListGrid that displays data from a table. One or more columns are integer IDs that refer to other tables. Those tables have meaningful text in them (e.g. code-description tables or perhaps account or client names).

Let's assume that you are displaying client data. Rather than displaying the internal row ID for the client, you want to display the client name using the foreign key ID on your ListGrid data to look up the name in the client table.

This is easy in smartGWT. You need to use a facility called "OptionDataSource". The smartGWT documentation says that you need to set OptionDataSource on the column in question.

Now the smartGWT documentation leads you up the garden path a bit when using this facility. Put bluntly, their method doesn't work when you have a ListGrid that is based entirely on a datasource. This is because you cannot use ListGrid.getField(...) to get the column in order to set OptionDataSource: until it's displayed, the ListGrid is empty, so you continually get an error trying to set attributes on null!

To work around this problem, smartGWT has ListGrid.setUseAllDataSourceFields. If you set this to true, you can create one or more ListGridField objects, call setOptionDataSource on these new objects (as well as setValueField and setDisplayField) and then, crucially, at the end of all this, call ListGrid.setFields to add them to the ListGrid.

Here's the part that is very unclear in the smartGWT documentation: You need to use setName to give your new objects the same names as the columns in the ListGrid's datasource that you want to set the OptionDataSource values for. smartGWT will then use your new objects to overwrite the corresponding objects in the datasource, i.e. you are replacing those columns with your own versions of them.

The part that is really not clear in any documentation or forum entries is that this is how you establish the mapping between the column in the ListGrid datasource that holds the ID (in our example, the client ID) and the meaningful text in the alternative table (the OptionDataSource) that you wish to display instead (in our example, the client's name).

Specifically:
  ListGridField.setName: Set to the ListGrid datasource column that you wish to implement the lookup on.
  ListGridField.setOptionDataSource: Set to the alternative datasource that contains the lookup data (it must have both the code and the displayed value).
  ListGridField.setValueField: Set to the column in the alternative datasource that has the code values matching those in the ListGrid column specified in setName above (in our example, the client id column of the lookup table). Once this and setName are set, smartGWT has both column names that are used to look up the data.
  ListGridField.setDisplayField: Set to the column to display instead of the ID (in our example, the client name column of the lookup table).

Here is some sample code that implements a lookup of Group Name based on a group_id in the ListGrid datasource. The ListGrid object is recipientGrid and the lookup datasource is GroupHeaderDS.

        // Keep all of the grid datasource's own fields; the ListGridField below
        // will override the matching "group_id" column (see setUseAllDataSourceFields above).
        recipientGrid.setUseAllDataSourceFields(true);

        ListGridField groupID = new ListGridField();
        groupID.setOptionDataSource(GroupHeaderDS.getInstance()); // lookup datasource
        groupID.setValueField("group_id");     // code column in the lookup datasource
        groupID.setDisplayField("group_name"); // text column displayed instead of the ID
        groupID.setName("group_id");           // must match the column name in the grid's datasource
        groupID.setTitle("Group Name");
        groupID.setAutoFetchDisplayMap(true);
        recipientGrid.setFields(groupID);
        recipientGrid.setAutoFetchDisplayMap(true);

Thursday 18 February 2010

smartGWT - how to implement a date and time picker/chooser

SmartGWT is severely lacking a nice date/time picker/chooser, so for now you need to use a workaround. The solution is to use the date picker in combination with a suitable method of selecting the time. The standard TimeItem in smartGWT is particularly shoddy and user-unfriendly.

What you want is a spinner for the hours, a spinner for the minutes and sensible defaulting to today's date and the current time.

Well, fret no more, here is a solution. For the purposes of this example, we are implementing a send timestamp where the user selects a date and time at which to send an item. The fields are placed on a DynamicForm so that they can be added to a layout.

Note that I set the borders here so that you can see the way that it's laid out. You will want to remove those lines in the real world.

  import java.util.Date;

  import com.smartgwt.client.types.DateDisplayFormat;
  import com.smartgwt.client.widgets.form.DynamicForm;
  import com.smartgwt.client.widgets.form.fields.DateItem;
  import com.smartgwt.client.widgets.form.fields.SpinnerItem;
  .
  .
  .
  final DynamicForm scheduleForm = new DynamicForm();

  // Date portion of the picker
  DateItem sendDate = new DateItem();
  sendDate.setDisplayFormat(DateDisplayFormat.TOSERIALIZEABLEDATE);
  sendDate.setEnforceDate(true);
  sendDate.setRequired(true);
  sendDate.setInputFormat("YMD");
  sendDate.setTitle("Send at:");

  // This part sets up the spinners for choosing a time to send at.
  // We need to default them to the current time.
  Date rightNow = new Date();
  int hour = rightNow.getHours();
  int min = rightNow.getMinutes();

  SpinnerItem sendTimeHr = new SpinnerItem();
  sendTimeHr.setName("sendTimeHr");
  sendTimeHr.setMax(23);
  sendTimeHr.setMin(0);
  sendTimeHr.setTitle("Time:");
  sendTimeHr.setWidth(2);
  sendTimeHr.setDefaultValue(hour);

  SpinnerItem sendTimeMin = new SpinnerItem();
  sendTimeMin.setName("sendTimeMins");
  sendTimeMin.setMax(59);
  sendTimeMin.setMin(0);
  sendTimeMin.setTitle(" ");
  sendTimeMin.setDefaultValue(min);

  // The borders are set only so that the layout is visible; remove them in production.
  scheduleForm.setNumCols(6);
  scheduleForm.setWidth(414);
  scheduleForm.setBorder("2px solid black");
  scheduleForm.setCellBorder(1);
  scheduleForm.setFields(sendDate, sendTimeHr, sendTimeMin);
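
To read the chosen values back and combine them into a single timestamp, here is a rough sketch (the exact value types returned by the spinners can vary, hence the defensive parsing):

  // Combine the picked date with the spinner values into one java.util.Date.
  Date sendAt = sendDate.getValueAsDate();
  int chosenHour = (int) Double.parseDouble(String.valueOf(sendTimeHr.getValue()));
  int chosenMin = (int) Double.parseDouble(String.valueOf(sendTimeMin.getValue()));
  sendAt.setHours(chosenHour);
  sendAt.setMinutes(chosenMin);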

Getting current time in GWT

The Calendar class isn't supported in GWT because it is not part of GWT's JRE emulation library, so it cannot be compiled into JavaScript.

So for now, you need to use the deprecated methods of Java's Date class:

  import java.util.Date;
  .
  .
  .
  Date rightNow = new Date();
  int hour = rightNow.getHours();
  int min = rightNow.getMinutes();

I am still trying to find a better supported way to solve this problem, but for now, deprecated code seems to be the best solution available.

Wednesday 17 February 2010

How to add a Date and Time popup to smartGWT

There are a lot of queries about how to add a popup for selecting time and date in smartGWT. The bottom line is that smartGWT is lacking a single widget to do this, so you have to use a DateItem and TimeItem together.

Here is some sample code that adds these items to a VLayout called leftPanel:

 final DynamicForm scheduleForm = new DynamicForm();
 DateItem sendDate  = new DateItem();
 TimeItem sendTime = new TimeItem();
       
 scheduleForm.setFields(sendDate,sendTime);
 leftPanel.addMember(scheduleForm);

Saturday 30 January 2010

smartGWT MenuBar - this.menus is undefined error

If you get an error like this when attempting to add a smartGWT MenuBar to your screen (usually at system startup):

  com.google.gwt.core.client.JavaScriptException: (TypeError): this.menus is undefined

then check that you have used setMenus when you first instantiate the menu bar. For example:
  MenuBar menuBar = new MenuBar();

  menuBar.setMenus(myFirstMenu, mySecondMenu);

  menuBar.setVisible(true);
  menuBar.setKeepInParentRect(false);

  RootPanel.get("menuPanel").add(menuBar);


It is possible to use the add functions to add members and menus later, but the underlying JavaScript needs a setMenus call to initialise the array that stores the menus.