Who Is Jason And Why Does He Need A Rest?

In a previous post we read about the definition of an API and touched on the types of interfaces available.  Today we will look at the interfaces specific to web and mobile application development.

This article is part of the ‘Essentials’ series, which focuses on the ground-level understanding required to begin the journey into the world of APIs and, over the course of the series, builds upon your skills and introduces core concepts. The series is intended for people who wish to gain a rudimentary understanding of APIs, such as students, new software developers and managers wishing to gain understanding and experience within their team.

One must appreciate that the Internet has been around for a long time and that the need to exchange information has always been present. As the Internet matured and technology advanced, the years have seen several implementations and iterations of standards for doing so.

In the early days, before the advent of the world wide web, developers had to rely on what is known as ‘socket-level programming’ to exchange information. Here one application would create a ‘server’ in order to receive connections and another application would create a ‘client’ with which to initiate connections. Today you may still hear of client/server communications, which refers to this type of topology. So long as developers (usually the same developer or team) could agree on the information exchanged between client and server, effective communication could take place.
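
To make this concrete, below is a minimal sketch of socket-level programming in Python. The address, port and message convention are purely illustrative assumptions; the point is that one process listens as the server, the other connects as the client, and both must already agree on what the exchanged bytes mean.

    # server.py - listens for a connection and replies using an agreed convention
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("127.0.0.1", 9000))    # illustrative address and port
        server.listen(1)
        conn, addr = server.accept()        # wait for a client to connect
        with conn:
            data = conn.recv(1024)          # both sides must agree on the format
            conn.sendall(b"HELLO " + data)  # reply using the agreed convention

    # client.py - connects to the server and sends a message
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 9000))
        client.sendall(b"WORLD")
        print(client.recv(1024))            # prints b'HELLO WORLD'

Notice that nothing in the code enforces the structure of the data; the ‘protocol’ exists only in the developers’ agreement, which is precisely where the trouble begins.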

Whilst this is a workable solution and is still used today for some internal communications and short-cuts, it does present some challenges. Firstly, the structure of the information must be known and agreed upon beforehand; any defect, even so much as a typing error, could corrupt the data. Secondly, if there were insufficient error handling between client and server, this corrupted data could be regarded as valid and create further unintended consequences.

When the HTTP standard was ratified, developers gained a standard mechanism by which to transfer data between client and server. The standard carefully laid out the methods by which to establish communications and the means by which to validate and mitigate errors through the use of response codes.
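
As a minimal illustration of that mechanism, the following uses only the Python standard library to issue a request and read back the response code (example.com is used simply as a publicly reachable host):

    # A bare-bones HTTP exchange using only the Python standard library.
    import http.client

    connection = http.client.HTTPSConnection("example.com")
    connection.request("GET", "/")             # the verb plus the path form the instruction
    response = connection.getresponse()

    print(response.status, response.reason)    # e.g. 200 OK - the response code signals the outcome
    connection.close()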

Of particular interest was HTTP’s use of verbs, which prefix each request and declare the intent of the operation, as follows:

GET - Used to retrieve information from the server. Strictly a read-only operation.
POST - Used to send information to the server. Although structured, it allows free-form data to be sent.
PUT - Also used to send information to the server. It is similar to POST but has some subtle differences.
DELETE - Much like the GET verb, sends an instruction to the server, but with potentially destructive consequences.

There are a few other verbs, but today our interest lies in the list above. With these, developers had a way to perform CRUD (create, read, update, delete) operations, which greatly assisted the process of exchanging information. Moreover the HTTP POST verb, whilst structured, allowed developers to exchange arbitrary information by attaching a ‘body’ envelope to their request. This allowed developers to attach binary content such as images, documents and the like.
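
As a hedged sketch of how these verbs map onto CRUD, the following assumes the widely used third-party ‘requests’ library and a purely hypothetical https://api.example.com/users endpoint:

    # The four CRUD operations expressed as HTTP verbs using the 'requests' library.
    # The endpoint and payloads are hypothetical.
    import requests

    base = "https://api.example.com/users"

    created = requests.post(base, json={"name": "Alice"})         # Create - new data in the request body
    user = requests.get(base + "/1")                               # Read   - should not change server state
    updated = requests.put(base + "/1", json={"name": "Alicia"})  # Update - replace the stored representation
    deleted = requests.delete(base + "/1")                        # Delete - remove the resource at this URL

    print(created.status_code, user.status_code)                  # the response codes report the outcome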

What I have described here is essentially the internal mechanics of a web form.  When you complete a form and choose a file to upload, for example, that information is sent via an HTTP POST request from the browser to the web server.  The application running on the server interprets this information and performs the necessary functions as intended by the developer.
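
What the browser does behind the scenes can be sketched in a few lines; here again the ‘requests’ library is assumed, and the upload URL and field names are invented for illustration:

    # Mimicking a browser form submission: ordinary text fields plus an attached file.
    import requests

    with open("report.pdf", "rb") as attachment:
        response = requests.post(
            "https://www.example.com/upload",
            data={"title": "Monthly report"},   # ordinary form fields
            files={"document": attachment},     # binary file carried in the request body
        )
    print(response.status_code)                 # e.g. 200 if the server accepted the upload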

Although HTTP satisfied the requirement of transporting the data, the need to send larger payloads grew as applications and technology evolved. The only suitable place in which to transmit this information was the ‘body’ envelope, which still presented challenges in how that information should be structured. Data was either transmitted in binary, which was more efficient but more difficult for humans to read (especially during troubleshooting), or as clear text, which was somewhat bloated but offered easy debugging.

The first and most well-known standard to overcome these challenges was XML, which offered developers a way to structure their information so that it was readable by both human and machine. The language prescribed a set of rules with which to encode a document and, in doing so, structure the information.

This presented developers with an opportunity to agree on the structure of information and then to outline this structure in what is known as a ‘schema’. The schema carefully stipulates the names, types and even lengths of fields and their contents. Given that the data format was now agreed and structured, the schema could be used by both sides to validate the data and to catch errors in the payload.
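
As a sketch of schema validation in practice, the following assumes the third-party lxml library and uses an invented ‘user’ document with a matching schema:

    # Validating an XML payload against a schema (XSD) with the third-party lxml library.
    # The document structure and field names are invented for illustration.
    from lxml import etree

    xsd = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="user">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="name" type="xs:string"/>
            <xs:element name="age"  type="xs:integer"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""

    xml = b"<user><name>Alice</name><age>30</age></user>"

    schema = etree.XMLSchema(etree.fromstring(xsd))
    document = etree.fromstring(xml)

    print(schema.validate(document))   # True - the payload conforms to the agreed structure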

SOAP is a standard which mainly governs web services (a form of API) and makes extensive use of XML to provide ‘contracts’ and ‘transactions’, all of which we will learn about later in this series.

However, there are some who claim that SOAP and its use of XML create a bloated payload full of irrelevant data, driven by its rigorous adherence to the schema. Good schema validation demands that any missing or incorrect information cause the transaction to fail, but developers wanted a way to send structured data, as needed, in a more lightweight manner for use over mobile networks and other leaner applications.

To this end, JSON (pronounced ‘Jason’) offered developers a format that was not only lightweight but also preserved the human- and machine-readable characteristics.

It offered developers a ‘self-describing’ data format, which is vastly different to the XML approach. With XML the structure has to be known and agreed upon beforehand so that the data can be tailored to it; a value must exist in a particular manner, in a particular location (nested or not) and conform to the schema to be regarded as valid. JSON, on the other hand, allowed developers to describe their data as they wrote it: very little syntax is required to express a string, an integer, an array or even an array of nested objects. Although we are rapidly exceeding the basic level of this series, it is sufficient to understand that JSON presents a lean means of encoding information with almost the same functionality as XML.
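
As a small illustration using only the Python standard library (the field names are invented), a nested structure can be turned into JSON text and back again without any prior schema:

    # Encoding and decoding a self-describing JSON payload with the standard library.
    import json

    payload = {
        "name": "Alice",                  # a string
        "age": 30,                        # an integer
        "roles": ["admin", "editor"],     # an array of strings
        "address": {                      # a nested object
            "city": "Cape Town",
            "postcode": "8001",
        },
    }

    text = json.dumps(payload, indent=2)  # lightweight, human-readable text
    restored = json.loads(text)           # the structure is recovered without a schema

    print(restored["address"]["city"])    # Cape Town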

As XML is to SOAP, so JSON is to REST, which is also extensively used in web services, particularly on mobile and Internet-of-Things devices.

By now you will have learnt a few things, so let us recap:

  1. It is clear that somewhere out there on the Internet there must be a group or division responsible for giving technology whacky names.
  2. SOAP and its use of XML provide developers with a highly structured way to transfer and verify the integrity of data, at the expense of larger payloads and more complicated schema development.
  3. REST and its use of JSON provide developers with a lean way to transfer data, with the risk of data integrity issues. There is no ratified standard yet for JSON schemas, but several efforts and workable solutions do exist, one of which is sketched below.
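
One such workable solution is the third-party ‘jsonschema’ Python package; as a hedged sketch (the schema and payload are invented), it plays much the same role for JSON that the XSD played for XML earlier:

    # Validating a JSON payload with the third-party 'jsonschema' package.
    import jsonschema

    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age":  {"type": "integer"},
        },
        "required": ["name", "age"],
    }

    payload = {"name": "Alice", "age": 30}

    jsonschema.validate(instance=payload, schema=schema)   # raises ValidationError if the data does not conform
    print("payload is valid")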

In a later series we will discuss when it is most appropriate to select a particular protocol.

Was This Helpful?
All of the content on this site is presented without advertiser support and is produced exclusively by me. If you find any of this information useful please consider it against the cost of a course or book. I gladly accept donations of any amount which goes directly towards producing more quality content and videos for this site.


Written by YourAPIExpert