The Internet of Things – is it all over the top?

Around 10,000 years ago, humans started to live in settled communities. They became farmers and established an enduring connection between mankind and nature. That connection is at the forefront of the Internet of Things.

Meet Bob Dawson. Bob has driven tractors and combine harvesters for nearly 40 years. For the last three years, he has driven both vehicles at the same time. He sits in his combine harvester while it is steered via GPS. On-board applications measure the crop yield in real time, passing information back for analysis to guide future sowing and spraying. The driverless tractor alongside is controlled by the same application that runs in the harvester cab. Sensors on the trailer detect when it is full; the harvester then automatically switches off the grain chute and tells the tractor to take the trailer to a waiting truck. Precision agriculture is with us and the Internet of Things (IoT) is at its heart.

What is the relationship between the IoT and Over The Top (OTT) applications, and what are the implications of IoT for customers, for network operators and for society in general?

There are four umbrella elements of the IoT:

  • Devices – A truly interconnected world could have hundreds of billions, even trillions, of devices. For example, every piece of packaging for every prescribed drug, every food wrapper. Every few metres of every stream in the world could have its own device in the water enabling analysis of water levels, quality, climate and sustainability.
  • Connectivity – for many IoT applications, the connectivity requirements are ubiquitous coverage, low unit costs and low data rates sustained over many years. Other applications require high speed, low latency and massive bandwidths. There are many fragmented alliances and consortia and, if some succeed, it could open a crack in the mobile operators’ defences as the gatekeepers for business-grade mobile connectivity.
  • Applications – Uber has become a well-known OTT taxi-hailing application. But Uber has bigger goals – to remove the need for people to own, or even drive, cars and to remove the need for towns to build any more car parks, where cars sit doing nothing all day while their owners are at work. Applications are often seen as the place to be in the value chain, as they are perceived to be where the value flows. Barriers to entry are small – an application needs network connectivity to run but does not require the negotiation of a direct relationship with the network operator.
  • Analysis – the volumes of data produced by IoT devices and applications, combined with unstructured, qualitative data such as social media feeds, mean that “data science” is a critical skill. The automated nature of IoT means that much of the interpretation will itself be done by machines, either algorithmically or, increasingly, through “neural network” style learning.

What are the implications for network operators? Consumers are more than willing to purchase applications and services directly from third parties, minimising their dealings with fixed and mobile operators. IoT could extend this separation dramatically. Operators will therefore have to build networks and carry data packets in such a way that unit costs fall more quickly than the prices they can charge. At the same time, operators have vast amounts of network data and customer/device data. They will have to develop their own data analysis skills, both to improve their own business and to sell insight-based services to others.

And the implications for wider society? If an individual driver has a car crash, then that driver might learn for next time. If an autonomous Tesla car has a crash, then all Tesla cars in the world can learn for next time. A world in which high-quality interconnected networks enable new applications and services to launch rapidly to reach and connect consumers, citizens and devices over the top of those networks ought to be a good thing. At the same time an interconnected network is only as secure as its weakest connection. It can be hacked.

IoT has the potential to become embedded in almost every aspect of society, so its adoption raises questions of balance between individual, social, political and economic goals. Solving these is likely to take a series of steps and iterations – rather like a human version of a self-learning network.

This is a summary of a full article which appeared in The Journal, December 2016. To access the article in full visit the ITP website (free for members).

Back in the day…

The good old telephone service has gone through many changes during its lifetime but perhaps the most significant was the move from analogue to digital, reflects Professor Nigel Linge.

Naturally, the human voice is inherently analogue, but transmitting it as such makes the resulting electrical signal susceptible to the impact of noise and attenuation, leading to a reduction in overall voice quality. However, in 1938 a radical alternative technique was proposed by Alec Reeves, who was working at International Telephone and Telegraph’s laboratory in Paris.

Reeves proposed that the analogue signal should be sampled at regular intervals, with the amplitude of each sample converted into a binary number and then transmitted as a series of electrical pulses. So long as these pulses could be detected at the receiver, the original analogue voice could be reproduced without degradation. The technique, known as Pulse Code Modulation (PCM), earned Reeves French Patent No. 852 183 on 3 October 1938 and in effect heralded the dawning of the digital age. Unfortunately, as is often the case with pioneering ideas, the technology of the day was not capable of realising the complexity of PCM.
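
The principle is easy to demonstrate. The short sketch below (Python is used here purely for illustration; the 8kHz sampling rate and 8-bit linear quantiser are assumptions that mirror later telephony practice rather than details of Reeves’s patent) samples a test tone at regular intervals and converts each sample into a binary code word – the ‘series of electrical pulses’ Reeves envisaged.

    import math

    def pcm_encode(signal, sample_rate_hz=8000, bits=8):
        # Sample the analogue signal (given as a function of time) at regular
        # intervals and quantise each sample to a binary code word.
        # Real telephony PCM adds A-law/mu-law companding, omitted here.
        levels = 2 ** bits
        codes = []
        for n in range(sample_rate_hz):              # one second of samples
            amplitude = signal(n / sample_rate_hz)   # assumed to lie in [-1, 1)
            q = int((amplitude + 1.0) / 2.0 * levels)
            q = max(0, min(levels - 1, q))           # clip to the available codes
            codes.append(format(q, '0{}b'.format(bits)))
        return codes

    # A 1kHz test tone sampled at the telephony rate of 8kHz
    tone = lambda t: math.sin(2 * math.pi * 1000 * t)
    print(pcm_encode(tone)[:5])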

In fact, PCM was not realised until 1968, when the GPO in Britain opened the world’s first PCM exchange: the Empress telephone exchange near Earls Court in London. This was the first exchange of its type that could switch PCM signals from one group of lines to another in digital form, and it laid the foundations for the more widespread use of digital switching that now sees PCM at the heart of our fixed-line, mobile and IP-based telephony, along with all our digital audio systems.

At the other end of the scale, and seemingly trivial in comparison, BT changed the way domestic telephones were connected to its network on 19 November 1981 with the introduction of the plug and socket interface. Up until this time the telephone in your home was permanently wired to the BT network, which meant that connecting a computer to the phone line could only be achieved using either an acoustic coupler or a telephone with an integrated modem such as the Type No13A Datel modem set. The best speeds that could be obtained with such systems were typically 300bit/s. However, the introduction of the plug and socket interface in 1981 changed all of this. The telephone service provided by BT was now terminated in a ‘master’ socket into which the customer could plug their own phone.

More importantly, this meant that there was now a direct electrical connection to the external phone line, which provided a more efficient mechanism for connecting a computer via a modem. In 1988 the V.21 modem increased speeds to 1.2kbit/s; in 1991 this was extended to 14.4kbit/s with the V.32 modem; and ultimately in 1998 speeds reached 56kbit/s with the V.90 modem. Thereafter the introduction of Digital Subscriber Line technology led directly to today’s superfast services – all thanks to the introduction of a simple socket.

Today, with 15 per cent of UK households now officially declared as mobile-only, there is a slow but growing trend away from traditional fixed-line telephony. An important step on that journey was made on 14 December 2009 when the Scandinavian telecommunications company, TeliaSonera, became the first operator to commercially launch a publicly available LTE (4G) mobile network. Back in 1981 Scandinavia had led Europe into the mobile era and now, in 2009, it was leading the world into 4G deployment with services opening in the central parts of Stockholm and Oslo. The network infrastructure was provided by Ericsson in Stockholm and Huawei in Oslo and was initially targeted at mobile broadband customers using a Samsung-provided LTE-only USB dongle. Proper 4G handsets took a little longer to materialise but once again Scandinavian companies led the way in Europe when the Samsung Galaxy S2 LTE became available to their customers in 2012. Later that year the UK witnessed the launch of its first 4G network. Today there are over half a billion 4G subscribers across 151 countries with, interestingly, the UK now cited as offering some of the highest average 4G download speeds in the world.

Mobile consolidation

For most of the past 20 years, competition authorities in mobile markets have focussed on securing the entry of additional competitors. Markets with high entry costs, like mobile telecoms, would not be expected to accommodate a very large number of firms but some degree of competition between a number of firms – more than one but not many – delivers better results.

Entry into mobile is restricted by the availability of radio spectrum. Opportunities arise by releasing spectrum from broadcasters or the military. But later entrants then face the challenge of competing with established firms. This was fine when demand was growing but today later entrants have to compete for existing customers of established operators.

But by the time of 4G (after 2010), interest in entering mobile markets had largely evaporated. The amount of new spectrum available was more limited and there had been some poor commercial returns by new entrants to 3G. Rather than promoting entry, 4G was driving the market towards consolidation. Firms like Hutchison were reluctant to invest in 4G when they had struggled commercially with 3G. Other firms, like Telefonica, felt that they could deploy their capital more profitably in emerging markets in Latin America.

Pressures on later entrants increased following the global financial crisis after 2007. Moreover, by 2010, operators were also feeling the effects of competition from over-the-top applications such as WhatsApp and of tighter regulation of international roaming charges and interconnection rates. But the main driver of consolidation was simply that late entrants found themselves unable to achieve sufficient scale to be profitable within a single technology cycle.

Rather than winning customers from rivals, firms can achieve both scale and cost savings through mergers with rivals. These savings mean that a rival operator can invariably offer a higher purchase price for the asset than a buyer that does not already have operations in the market. The other option for sub-scale or unprofitable firms was to exit the market by selling to a party outside the mobile market. An example of this was EE, which was acquired by BT.

Consolidation can provide an escape route for sub-scale firms but it has less obvious benefits for consumers. The European Commission is concerned that prices will not be as low after the merger as they might otherwise have been. The advocates of mergers claim that, in the longer term, the cost savings from combining assets and operations will offset some of the upward pressure on prices that might otherwise be associated with a reduction in the number of firms. The Commission has generally rejected these arguments, finding that any savings are more likely to bring higher profits for the owners of the merging firms than to be passed on as lower prices.

The other and more interesting claim relates to future investment. Advocates of mergers claim that the merged firm will be better able to invest because of its greater scale and/or higher levels of profitability – claims that are extraordinarily difficult to assess.

Consequently, competition authorities have only been prepared to approve mergers if the parties are also prepared to take various steps to replace the firm that is exiting with another entrant. But why, when one set of investors were seeking to exit, would another set be persuaded to enter? The predictable lack of interest has prompted authorities to promote various models which would allow new firms to enter the market at lower cost and risk by using the merged firm’s existing network as a Mobile Virtual Network Operator for an extended period.

The most interesting and immediate question is what happens to those firms whose merger plans have had to be abandoned. Do they sell to a party from outside the market at a lower price? Do they find another way to grow or to become profitable? In Europe, such firms can pursue the ‘failing firm’ defence, arguing that without a merger the firm will exit the market altogether.

The current debate reveals how little we actually understand about what determines the performance of these markets. We know monopolies are generally to be avoided, but we know very little about what might ensure higher levels of investment, or how those investments might translate into prices, quality or other outputs that consumers care about.

This is a summary of a full article which first appeared in The Journal, December 2016. To read the article in full please visit the ITP site (free for members). 


Lateral security: networked immunity everywhere!

At a modest estimate the ‘Dark Side’ should be overpowered by the ‘Good’ of the computing world by at least 3000:1. But how well is the ‘Good’ doing? Peter Cochrane explains…

Our governments, companies, banks, institutions and security services are more than a match for the rogue states, organised crime, hacker groups, and lone sharks huddled over screens in a multitude of bedrooms. The Good has more manpower, compute power, facilities, knowledge and money by a huge margin, and yet the Dark Side continues to prosper! How come?

It is all down to the power of networking. One side operates in a secret ‘need to know’ mode whilst the other is of necessity ‘need to share’ – it is as simple as that. The Dark Side are the ultimate networkers and sharers, and the magnification effect is exponential.

So what can the Good do to win? Firewalls don’t work, and malware protection is always after the fact – a band-aid applied to a known and already serious threat. The Good are also slow to detect incursions and even slower to respond; in effect, they are always on the back foot. We need to be proactive, fast and anticipatory; then, and only then, can we hope to turn back the tide of the Dark. If we do not, we are already hatching a new and far worse nightmare called the ‘Internet of Things’ – or, more correctly, ‘Clouds of Things’. The potential risks are obvious and the solutions non-existent. Today’s design and build of the IoT is so badly flawed it is bound to end in tears.

We probably have one big shot at creating an effective defence mechanism. This is founded on the established biological principles of white cells and auto-immunity.

Building hard and soft malware traps into every chip, card, device, shelf, rack, suite, room, building and network will cure the problem. The automatic detection and isolation of malware, followed by its removal and destruction, is a necessity because people cannot do it; this appears to be the only response likely to disrupt the Dark Side and put it on the back foot. If the organisations and people of the Good will not network and share, then their hardware and software have to do it for them.

Is such a proposition viable? Some big players are looking at it already and the hardware and software overhead appears minimal. And so we might conjure a number of future scenarios, but the most iconic goes something like this. A man walks into a coffee shop with an infected mobile which tries to infect everything on WiFi and Bluetooth that is in range.

But these devices recognise, or at least suspect, an attack and isolate the infected device. They then collectively search out the ‘antidote malware remedy’ and upload it to attack the infection. Once confirmed as clean, the mobile device is accepted back into the community and allowed to connect and communicate.
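
As a purely hypothetical sketch of that coffee-shop scenario (none of the names or functions below refer to a real product or API), the collective ‘immune response’ might look something like this, with every nearby device running the same loop:

    from enum import Enum, auto

    class DeviceState(Enum):
        TRUSTED = auto()
        QUARANTINED = auto()

    def handle_peer(peer, detector, community):
        # Every nearby device runs this same loop, so the response is
        # collective rather than centralised. All names are hypothetical.
        if detector.looks_infected(peer):
            peer.state = DeviceState.QUARANTINED
            community.block_connections(peer)          # isolate the infected device
            remedy = community.search_for_remedy(peer.threat_signature())
            if remedy is not None:
                peer.apply(remedy)                     # upload the 'antidote'
            if detector.confirms_clean(peer):
                peer.state = DeviceState.TRUSTED
                community.allow_connections(peer)      # re-admit to the community
        else:
            peer.state = DeviceState.TRUSTED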

This might all sound complex and cumbersome, but it turns out not to be so, and such detection and immunisation cycles can occur in seconds, unnoticed by the human owner. Better still, we no longer need to get involved in security as individuals; displaced by machine intelligence, we are left to get on with what we do best – creating, solving, building and changing. Where does the ultimate responsibility then lie? The producers and suppliers of hardware and software have a new product line, a new service and a new responsibility.

Of course, the Dark Side will try to subvert all this, but by then it could be ‘game over’ and too late. I just hope the Good get off the grid and cross the winning line really soon!

This article first appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

Dr Peter Cochrane, OBE, BSc, MSc, PhD, DSc, CGIA, FREng, FRSA, FIEE, FIEEE

Peter is an entrepreneur and a business and engineering advisor to international industries and governments. He has worked across hardware, software, systems, networks, adaptive system design and operations. He currently runs his own company across four continents, is a visiting Professor at Hertfordshire University, was formerly CTO at BT, and has received numerous awards including an OBE and the IEEE Millennium Medal.

Back in the day

From murder most foul to Integrated Services Digital Network and 3G, January, February and March have seen some extraordinary highlights in the telecommunications world, says Professor Nigel Linge.

On 1 January 1845 John Tawell entered a chemist’s shop and bought some Scheele’s Acid, a treatment for varicose veins that contained hydrogen cyanide. He travelled to Salt Hill near Slough where he met his mistress, Sarah Hart, whom he then proceeded to poison with the acid. Sarah’s screams and cries for help were heard by a neighbour but John ran off and made his way to Slough railway station where he boarded the 7:42pm train to London Paddington. Unfortunately for John, Slough and Paddington stations had been fitted with a Cooke-Wheatstone two needle electric telegraph system.

Sarah Hart’s neighbour raised the alarm and the local vicar pursued John to the station, where he asked the Station Master to signal ahead to Paddington and alert the police. However, the telegraph system could not send the letters J, Q or Z, which created problems because the vicar said that John was dressed like a Quaker! The word Quaker had to be sent as ‘Kwaker’, which caused the telegraph operator at Paddington great consternation in understanding it, even after retransmission. Eventually the message was handed to Sergeant William Williams, who was given the task of tracking down and arresting John. At his trial, The Times newspaper reported that, ‘Had it not been for the efficient aid of the electric telegraph, both at Slough and Paddington, the greatest difficulty, as well as delay, would have occurred in the apprehension’. John Tawell was hanged at 8am on Friday 28 March 1845 and thereafter became known as ‘The Man Hanged by the Electric Telegraph’.

The electric telegraph was the country’s – and the world’s – first data network. Of course, data in that sense was the written telegram and the network never reached into our homes. However, with the emergence of the home computer and our onward drive towards digitisation came a demand for public data networks for both business and domestic consumers. In response, on 7 February 1991 BT launched its Integrated Services Digital Network (ISDN) service. Originally developed in 1988 by the CCITT (now the ITU-T), ISDN provided a digital connection comprising two symmetric bi-directional data channels (2B), each operating at 64kbit/s, and a 16kbit/s signalling channel (D). This basic rate 2B+D service offered much higher data rates than its competitor technologies and proved especially popular with the broadcasting industry, where the guaranteed data rate and low latency were ideal for high-quality voice and music transmission.

BT developed and marketed its ISDN service as Home Highway but in 2007 withdrew it from domestic customers because of the rise in the popularity and capability of xDSL broadband access. As of 2013 there were still 3.2 million ISDN lines in the UK, but this number is falling year on year. Within Europe, ISDN was most popular in Germany, which at one point accounted for 20% of the global market.

Delivering data into the palm of your hand offered a different challenge but took an important step forward on 3 March 2003, when the first mobile network to offer a 3G service was launched in the UK by a new entrant into the mobile marketplace. Telesystem International Wireless (TIW) UMTS (UK) Limited had the backing of Hong Kong-based Hutchison Whampoa, but Hutchison soon bought out TIW to create H3G UK Limited which, having acquired spectrum in the infamous UK 3G auction, marketed its new service under the more familiar ‘Three’ brand. Choosing to launch the service on 3/3/3 was therefore an opportunity not to be missed! Quite how much network coverage was available at that time remains a point of conjecture. Nevertheless, the UK had entered the 3G world, with the first public 3G call being made by Trade and Industry Secretary Patricia Hewitt, who called Stephen Timms, Minister for e-Commerce.

Three launch devices: NEC e606, Motorola A830 & NEC e808

The move to 3G brought with it the promise of higher data rates and, at the time of launch, Three offered its customers a choice of three different handsets: the Motorola A830, NEC e606 and NEC e808. As is often the case, these first-generation handsets were actually poorer than their predecessor technology, being bulkier and suffering from poor battery life. That aside, by August 2004 Three had connected one million customers.

This article first appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

Packet versus voice switching

The prevailing view is that moving voice to an IP solution gives many advantages, especially cost. What are the economic arguments for switching voice calls in packet rather than circuit mode, putting to one side the technical and quality-of-service issues?

Some inter-related themes form the big picture of telecommunications networks today:

• The big architectural difference between circuit- and packet-switched networks is the location of the service control: within the network for circuit switching, at the edge for packet switching. Control at the edge and the use of an essentially dumb packet network enable ‘over the top’ service providers, such as WhatsApp, Skype and FaceTime, to have the commercial relationship with the users.

• There is a growing sentiment that always-on access to the Internet is a basic human right. Unfortunately, there is also a tendency to devalue content, insofar as people are reluctant to pay for it.

• Many network operators are considering how best to replace their circuit switches forming the PSTN. Replacement by IP systems has proved difficult and many operators are waiting to take advantage of the shift of voice away from the fixed PSTN onto mobile. Despite this, the fixed PSTN is still essential in most countries as a network of last resort and for interconnection.

On the question of whether packet is cheaper than circuit switching for voice, there are several points to consider:

  • Switching system costs – Many would say that packet switching is cheaper – after all, many supported services are free. Often overlooked, though, is the price users pay for the infrastructure supporting voice over Internet Protocol (VoIP) – the computer/tablet/smartphone, broadband access, Internet Service Provider service, etc. The question, therefore, is whether there is any inherent cost difference (as opposed to a price difference). Interestingly, there is remarkably little difference between the elements of a circuit switch-block and those of an IP router; both usually comprise a time-space-time switch built from similar semiconductor technology. So, apart from differences in the costs of signalling, the inherent costs are essentially equal.
  • Terminating functionality – A profound influence on all network costs is the location of the interfacing equipment. Terminating a line on the exchange represents some 70% of the total cost of the switching system, a cost incurred by the network operator. For VoIP services, however, the analogue-to-digital encoding, packetisation, powering and ring-tone generation are in the users’ devices (a computer or tablet), the costs of which are borne by the user, giving a cost advantage to VoIP providers. However, if a fixed operator hopes to use VoIP to replace its circuit switches and many of its users wish to keep their telephone and line, the operator will have to provide the terminating functions at the boundary of the packet switching system. Mobile operators do not have this concern, as mobile handsets provide the functionality.
  • Multi-service platform – A single all-purpose platform supporting all services has long been seen as a way of saving capital and operational costs. This advantage is true with any technology, not just IP (indeed, earlier multiservice platforms were circuit-based).
  • Bearer traffic loadings – Potential loadings of 85% or higher on packet networks compare favourably to about 70% on circuit-switched networks. However, such loadings on packet networks are avoided to reduce the probability of packets being delayed, particularly for latency-intolerant services such as voice. Loadings of around 30% are typically required to ensure voice quality, reducing any cost advantage of packet switching (see the queueing sketch after this list).
  • Industry economies of scale – There is a general move towards the use of IP technology for networks as there is with computer-controlled digital electronics in general. Since vendors’ prices are driven by economies of scale, today’s prices of packet switching benefit from this shift – which rather makes the economics of circuit versus packet switching a self-fulfilling prophecy.
  • On a like-for-like basis, therefore, there is no inherent cost difference between circuit- and packet-switching technology. However, packet switching can benefit from the shift of the user network interface (UNI) and the move to a multi-service network. The enthusiasm of fixed operators to move to VoIP has slowed because of the need to support existing fixed subscriber lines. The UNI location is not an issue for mobile networks; existing 4G networks will shift to all-IP architectures as the existing circuit-switched mobile exchanges are withdrawn.
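
The loading point above can be illustrated with elementary queueing theory. The sketch below uses a deliberately simple M/M/1 model with assumed figures (a 2Mbit/s link and 200-byte voice packets); real VoIP traffic is not Poisson and modern routers use priority queues, so the numbers are indicative only, but they show how quickly delay grows as loading rises:

    def mm1_mean_delay_ms(load, link_rate_bps=2_000_000, packet_bits=1600):
        # Mean time a packet spends queueing plus being transmitted on an
        # M/M/1 link: service_time / (1 - load), returned in milliseconds.
        service_time = packet_bits / link_rate_bps
        return 1000 * service_time / (1 - load)

    for load in (0.30, 0.70, 0.85, 0.95):
        print('load {:.0%}: mean delay {:.2f} ms'.format(load, mm1_mean_delay_ms(load)))

At 30% loading the mean delay is barely above the bare transmission time of 0.8ms; at 85% it is several times larger and still climbing steeply, which is why packet networks carrying voice are run well below their theoretically achievable loading.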

This is an executive summary of the full article which appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

The need for synchronisation in telecommunications

Synchronisation in telecommunications largely arose from the introduction of 64kbit/s digital switching and transmission of voice telephony. For digital switching to work, it is critical that the sampling, coding, multiplexing and switching all occur at exactly the same rate. If samples arrive at the switch more often than they can be written then at some point a sample must be thrown away; and if samples arrive less often then at some point the same sample will be repeated. Either of these will result in an audible disturbance. Hence each piece of equipment in each exchange must run at the same rate (or frequency), and from this arises the fundamental need for synchronisation.
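
The sensitivity is easy to quantify: a buffer between two exchanges gains or loses one full 125 microsecond frame after a time equal to the frame period divided by the fractional frequency offset between their clocks. The illustrative offsets below are assumptions for the sake of the example, not figures from the article:

    def seconds_between_slips(fractional_offset, frame_period_s=125e-6):
        # Time for a buffer to gain or lose one whole 125 microsecond frame
        # when two exchange clocks differ by the given fractional offset.
        return frame_period_s / fractional_offset

    for offset in (1e-6, 1e-9, 1e-11):   # rough quartz, good quartz, caesium-grade
        t = seconds_between_slips(offset)
        print('offset {:.0e}: one frame slip every {:.0f} s (~{:.1f} hours)'.format(offset, t, t / 3600))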

Rather than providing separate infrastructure for distributing synchronisation, the Primary Rate bit-stream at 2048kbit/s was adopted as the carrier of the reference. However, these bit-streams do not have a wholly repetitive and predictable pattern of edges from which to recover the reference frequency. The short-term variations (or jitter) in frequency that accumulate along a chain of nodes in the hierarchy are smoothed out by means of a high-quality oscillator and a long time-constant phase-locked loop.
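
In spirit, the smoothing is no more than a low-pass filter with a very long time constant. The toy filter below is a sketch only – real slave clocks lock to phase rather than simply averaging frequency estimates – but it shows the idea:

    def smooth_frequency(raw_estimates, time_constant_samples=1000):
        # First-order low-pass filter standing in for the long time-constant
        # phase-locked loop: short-term jitter in the recovered frequency is
        # averaged away while slow drift is still tracked.
        alpha = 1.0 / time_constant_samples
        smoothed = raw_estimates[0]
        output = []
        for estimate in raw_estimates:
            smoothed += alpha * (estimate - smoothed)   # exponential averaging
            output.append(smoothed)
        return output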

Increasing traffic demanded more capacity, which was met, in part, by multiplexing the Primary Rate bit-streams into higher and higher bit-rate transmission paths. However, these Primary Rate signals may not be synchronous with each other and, to cater for the differences, the old Plesiochronous Digital Hierarchy (PDH) used a technique known as justification, whereby dummy bits are either added (or not) to equalise the bit rates. This approach means that the synchronisation borne by a Primary Rate input signal is carried transparently and independently through the multiplex (although some jitter may be introduced by the justification).
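
A minimal sketch of positive justification, under the simplifying assumption of one justification opportunity per frame and an arbitrary buffer threshold (the control-bit values follow the usual majority-vote convention, but the figures are illustrative rather than the real frame structure):

    def justification_decision(buffer_fill_bits, threshold_bits=8):
        # One decision per multiplex frame: if the tributary buffer has run
        # low (the tributary is slower than the multiplexer is reading it),
        # send a dummy bit in the justification slot and set the control bits
        # so the far end discards it; otherwise the slot carries real data.
        if buffer_fill_bits < threshold_bits:
            return {'justification_slot': 'dummy bit', 'control_bits': '111'}
        return {'justification_slot': 'tributary bit', 'control_bits': '000'}

    print(justification_decision(5))    # slow tributary -> stuff
    print(justification_decision(20))   # enough data    -> no stuffing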

Synchronous Digital Hierarchy (SDH) multiplexing largely replaced PDH. SDH uses a byte-interleaved scheme to multiplex and cross-connect the payloads of the SDH signals. However, it cannot be assumed that the payloads are synchronous with the overall SDH frame, and even other SDH frames may not be synchronous (for example, they may originate within another operator’s network). To cope with this, SDH has a justification method in which a pointer is included in the overheads to indicate the start of the payload within the frame, allowing the payload to ‘float’ within the SDH multiplex structure. SDH still uses a 125 microsecond frame, and so the synchronisation rate and interfaces are carried over from PDH.

The emergence of IP telephony and softswitching had been hailed as the beginning of the end of the need for synchronisation. However, cellular mobile networks had quietly been taking advantage of the existing synchronisation infrastructure for a quite different purpose, and they are now probably the dominant users of frequency synchronisation.

Digital cellular base stations have tight limits on the frequency of their carriers and frame repetition rates in order that mobile devices can successfully decode signals from different nearby base stations and seamlessly move between them during calls. Early base stations achieved this using costly high stability oscillators. However, the transmission to these base stations uses the same Primary Rate signal as described above and this is used as an accurate frequency reference. The signals do suffer from jitter but this can be smoothed out by a relatively inexpensive oscillator locked to the incoming transmission.

More recently, base stations began using Ethernet-based transmission; it is mandatory for 4G and now common for 3G and 2G as well. The delivery of synchronisation was therefore incorporated into the Ethernet standard to create Synchronous Ethernet (SyncE). An alternative to SyncE is Precision Time Protocol (PTP), a method for transferring time over packet networks. PTP has an advantage over SyncE in that it can be implemented on pre-existing Ethernet networks, but it is sensitive to packet delay variation.
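
The arithmetic at the heart of PTP is a simple two-way exchange of timestamps: t1 and t4 are taken on the master’s clock, t2 and t3 on the slave’s. The sketch below shows the standard calculation; it is exact only if the forward and reverse path delays are equal, which is precisely why packet delay variation and asymmetry limit PTP’s accuracy:

    def ptp_offset_and_delay(t1, t2, t3, t4):
        # t1: master sends Sync, t2: slave receives it,
        # t3: slave sends Delay_Req, t4: master receives it.
        offset = ((t2 - t1) - (t4 - t3)) / 2.0           # slave clock error vs master
        mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, mean_path_delay

    # Example in seconds: slave running 1.5us ahead, 50us one-way delay
    print(ptp_offset_and_delay(t1=0.0, t2=51.5e-6, t3=100.0e-6, t4=148.5e-6))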

LTE-Advanced base station air interfaces will require alignment in time (with sub-microsecond accuracy) as well as frequency, so that mobile devices can sort out the signals from different base stations. Time synchronisation can be achieved via the transmission network or from an off-air source such as GPS. None of the available methods is without challenge, and much of the current work on synchronisation is focused on solving these problems.

This is an executive summary of the full article which appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

Time from the sky

When Journal contributor Charles Curry first started using the US-based Global Positioning System (GPS) in the mid-1980s in the oil industry, there were only seven GPS satellites giving just one hour per day when you could get a fix. Nowadays, nearly every smart phone and tablet has a multi-constellation 30-channel Global Navigation Satellite System (GNSS) receiver embedded.

GPS was conceived in 1973 by the US Department of Defense to address defence navigation needs, and the first experimental satellite was launched in 1978. Today there are 31 operational GPS satellites in orbit, with 24 needed for full coverage. But GPS is not alone: the Russians created their equivalent, as did Europe, China, Japan and India, with varying degrees of coverage and maturity.

So how does a navigation system disseminate precise time? All GNSS satellites have atomic clocks on board. These are continuously monitored by ground stations and provide the triangulation capability and accuracy for navigation and positioning. (Consider that light travels approximately 30cm in a nanosecond and then one has the basis for slaving a local oscillator to visible GNSS satellites.)
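
That 30cm-per-nanosecond figure is worth making concrete. The simple calculation below (illustrative values only) shows how directly receiver clock error translates into pseudorange error, and hence why the receiver’s inexpensive local oscillator must be continuously steered against the satellites’ atomic clocks:

    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def range_error_m(clock_error_ns):
        # Pseudorange error produced by a given receiver clock error:
        # roughly 30cm for every nanosecond.
        return SPEED_OF_LIGHT_M_PER_S * clock_error_ns * 1e-9

    for ns in (1, 100, 1000):
        print('{:>5} ns clock error -> {:7.1f} m of pseudorange error'.format(ns, range_error_m(ns)))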

By the mid-1990s GPS started to be used as a timing reference for telecoms. In the UK, BT were the first carrier to adopt GPS to frequency stabilise Rubidium atomic and quartz oscillators at major switch sites. GPS was not the primary reference source; this was, and still is, a cluster of Caesium atomic clocks.

Timing performance can be measured quite easily and displayed using an ITU-standardised metric known as Maximum Time Interval Error (MTIE) – i.e. the time interval error over varying observation periods, measured relative to a higher-stability reference.
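
MTIE is straightforward to compute from a series of time-error measurements: for each observation window length, it is the largest peak-to-peak excursion seen in any window of that length. The direct (and deliberately unoptimised) sketch below uses assumed sample data:

    def mtie(time_error, window_len):
        # Largest peak-to-peak excursion of the time-error series seen in
        # any window of `window_len` consecutive samples.
        worst = 0.0
        for start in range(len(time_error) - window_len + 1):
            window = time_error[start:start + window_len]
            worst = max(worst, max(window) - min(window))
        return worst

    # Time error in nanoseconds, one sample per second (assumed data)
    te = [0, 3, 5, 2, -4, -6, -1, 0, 4, 7]
    print([mtie(te, w) for w in (2, 5, 10)])   # MTIE over 2s, 5s and 10s windows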

GPS is indeed a great achievement, and it became somewhat taken for granted. In 1996, when GPS was starting to become accepted as the solution for frequency stability in telecom networks, the time aspect had yet to emerge. However, in the early 2000s mobile phone technology was emerging which, for some 2G standards, needed precise time at the base station as well as frequency stability. 3G, 4G and 5G also need precise time. Clusters of small cells need to communicate with each other in a synchronous manner, which requires precise time at the edge. This need coincided with the transition from Synchronous Digital Hierarchy to Carrier Ethernet-based networks and the development of Synchronous Ethernet to provide traceability to the central reference clocks. An alternative to Synchronous Ethernet is the use of the IEEE Precision Time Protocol as the mechanism for transporting time and frequency over Ethernet networks. Although this is proving a successful technology for frequency, it is not quite so effective for time; it may yet be necessary to deploy GPS at the edge.

GPS at the edge seems the ideal solution but there are a number of issues.

  • The cost of deploying roof antennas is a major concern.
  • Reliability and continuity are critical for mobile networks but, in order to reduce the price of GPS receivers to meet the edge-of-network cost model, holdover stability is the first casualty and resiliency the second.
  • There is an emerging threat from low-cost GPS jammers, which are readily available even though their use in the UK contravenes the Wireless Telegraphy Act.
  • Space weather can disrupt the GPS service with unpredictable results.
  • Spoofing is another threat – a concept based on rebroadcasting the GPS signal with different time and position information.

One potential solution, at least for fixed infrastructure, is the terrestrial transmission of a complementary Positioning Navigation and Timing service known as eLoran. It works indoors and is not vulnerable to the same jamming and spoofing threat as GNSS. It does, however, have geographical limitations. eLoran is at a different technology readiness level to GPS and can’t yet be relied upon for synchronisation and timing.

Where will we be in another 10 or 20 years? If the lessons of the past – cost reduction, miniaturisation and technology hybridisation – are learned, we will have eLoran-type receivers embedded in all fixed infrastructure applications. We won’t have to worry about roof antenna installations and the whole thing will cost less than a few dollars.

This is an executive summary of the full article by Prof Charles Curry, BEng, CEng, MITP, FIET, Managing Director of Chronos Technology, which appeared in The Journal, Volume 10, Part 1 – 2016.