Insight: Deployment of 3G/4G technologies in Pakistan

The telecommunications sector of Pakistan has seen outstanding advancements over the last decade as a result of trade and investment liberalisation, privatisation, the creation of a competitive environment and an openness to modern technologies. Mobile coverage has been extended to rural areas, enabling many workers to own mobile phones that were once considered a luxury.

The total number of 3G/4G subscribers reached 37 million at the start of 2017 and Pakistan is tenth in the world for mobile broadband subscriptions. In 2014, $1.1Bn was raised in Pakistan for the auctions of 3G and 4G spectrum. A further auction for Next Generation Mobile Services spectrum is planned for 2017. The base price for 2 x 10MHz blocks in the 1800MHz spectrum is set at $295M, with the successful bidder having the right to run a technology-neutral network.

Before realising commercial benefits, operators have had to face some new challenges with 3G and 4G technologies including:
• Licence cost – the 2014 auction raised a total of $903m from the four winners for 3G and $210m from the only operator to bid for 4G. These sums put huge financial pressure on the operators, who also faced significant capital investments (in excess of $3.3Bn over the last two years). That said, since the auction, mobile operators’ revenues have increased by 60%, which highlights the demand.
• Network coverage – this plays an important role in increasing 3G/4G adoption. Most developed countries considered 3G as the way forward when their objective was broadband penetration, and delaying deployment of 4G allowed operators to recover their investment in 3G. The number of mobile phone users exceeded 136 million by December 2016, with the number of mobile broadband subscribers reaching 40 million.
• Mobile data traffic – mobile Internet use is growing significantly and is driving the need for higher capacity. The average smartphone data consumption globally increased 18-fold between 2011 and 2016 and is currently 150 MBytes/month (see the sketch after this list).
• Affordability – the cost of a 3G smartphone has fallen to as little as $50. The cheapest LTE-enabled handsets are priced at over $200, which is a barrier to the adoption of 4G. A recent survey revealed that more than 17% of the population is willing to pay about $12 per month for 3G services. Mobile companies are offering 3G packages that are within these affordability limits.
• Subscriptions – the number of broadband subscriptions in Pakistan, including 3G and 4G, exceeded 18 million in August 2015. Ovum forecasts that by end-2019 there will be 103 million 3G subscribers in Pakistan, representing about 58% of the mobile market and overtaking 2G subscriptions. LTE subscriber numbers will still be relatively modest, reaching about 6.6 million by the end of 2019.
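
Those growth figures imply a steep compound rate. The short sketch below is a rough, hedged illustration: only the 18-fold increase, the 2011–2016 window and the 150 MBytes/month figure come from the bullets above; everything else is back-of-envelope arithmetic.

```python
# Rough illustration of the compound annual growth rate implied by an
# 18-fold rise in average smartphone data use between 2011 and 2016.
# The 18x multiple, the five-year window and the 150 MB/month end point
# come from the text above; the starting volume is back-calculated.

end_volume_mb = 150.0          # average consumption in 2016 (MB/month)
growth_multiple = 18.0         # "increased 18-fold" between 2011 and 2016
years = 2016 - 2011            # five-year window

start_volume_mb = end_volume_mb / growth_multiple
cagr = growth_multiple ** (1.0 / years) - 1.0

print(f"Implied 2011 consumption: ~{start_volume_mb:.1f} MB/month")
print(f"Implied compound annual growth: ~{cagr:.0%}")   # roughly 78% per year
```

On these assumptions the implied growth is roughly 78% per year, which underlines why capacity is such a pressing concern for operators.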

The benefits to society and economic growth of 3G/4G services include:
• Education – this includes the delivery of on-line real time/interactive educational facilities; enabling teachers to provide online individual guidance.
• Healthcare – this includes real-time data collection and health record access; analysis, diagnosis and consultation; disease/epidemic outbreak tracking; and health/administrative systems.
• Government – potential e-government applications include on-line systems for land, vehicle and other property transfers; smart grids for smart meters and sensors that will manage power stations and energy transmission lines; disaster and crisis management; alerts for jobs, training and guidance about higher education.
• Citizens – mobile data services are redefining the daily activities of people. For example, mobile banking provides a new level of convenience and safety for customers, and projects under the ‘safe city’ umbrella have been undertaken, including the networking of hundreds of surveillance cameras.
• Economy – a study showed that, in developing countries, a 10% increase in broadband penetration accelerates economic growth by 1.38%.

Countries that adopted 3G/4G technologies a few years back have reaped the benefits within their economy and society. 3G/4G services have empowered citizens by transforming the way they live, learn, work and play, making their lives more productive, secure and meaningful.

This is a summary of the full article which appeared in The Journal, Volume 11, part 2. Read it here (free for members).


Insight: Transforming telecommunications service execution

Software increasingly underpins the services enterprises offer and the associated operational processes. This is particularly true in the telco sector where time-to-market is vital. However, whilst software is a key enabler, operators quickly discover that the maintenance of software is complex and expensive.

A recent estimate suggests 60% of global software spend (approximately $50Bn) is on software maintenance. Thirty per cent of this is associated with software comprehension – dealing with the inherent complexity of software. Other factors include insufficient
governance of the software development process and a lack of comprehensive
documentation of the final software product. Often the only comprehensive documentation of software, essential to plan new versions and undertake maintenance, is the software source code itself.

The latest developments in the software industry are aimed at improving software governance and the automation of documentation creation. One such technology is Software Product Line (SPL).

SPL is an innovative software governance and reuse approach. It introduces a robust
technique which enables the planning and creation of reusable software units at the start of the software development. It is distinguished from most other software
development methodologies by the comprehensive scope of reuse – not just source code but all the types of software assets involved in the development lifecycle, such as user requirements, architectures and test plans.

SPL is defined by two development processes: 1) domain engineering and 2) application engineering. Domain engineering creates a reusable software platform, whereas application engineering derives different versions of the product from this platform. The reusable software platform consists of different software assets such as requirements models, architectural models, software components, test plans and test designs. A group of software assets defining a specific capability or functionality is grouped as a feature. Each new software product created by SPL is defined by a unique combination of features. The relationships between features are captured in a feature model, which defines the product line. The feature model and the reusable software assets are fed to the application engineering process. Here, different features are selected to determine which software assets will be included and configured in a new product. The output of this process is a collection of configured software assets, typically a new (version of a) product or software, ready to be deployed.

Many organisations are using SPL to achieve extraordinary gains in productivity, time-to-market and product quality. A good application area for SPL in the telecommunications industry is Network Function Virtualisation (NFV). NFV aims to address cost reduction and flexibility in network operations – network functions are implemented as software running over a virtualised infrastructure and provisioned on a service-by-service basis. NFV can benefit from SPL by adopting a systematic method for customising network services to accommodate diverse requirements. Not only does this enable the proper planning of NFV platform software artefacts, but it also enables a robust customisation of the chain of service functions that can adapt rapidly to situations such as fluctuations in the network execution environment and service failures.
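
To make the feature-model idea concrete, here is a minimal sketch of SPL-style product derivation. The feature names, asset names and data structures are illustrative assumptions only; they are not drawn from the EBTIC-SPL tool described below.

```python
# Minimal sketch of SPL-style product derivation: a feature model maps
# features to reusable assets, and each product is a valid combination of
# features. Feature and asset names here are purely illustrative.

FEATURE_MODEL = {
    # feature name -> (mandatory?, reusable assets grouped under the feature)
    "routing":       (True,  ["routing_requirements.md", "router_component", "routing_tests"]),
    "firewall":      (False, ["fw_requirements.md", "fw_component", "fw_tests"]),
    "load_balancer": (False, ["lb_requirements.md", "lb_component", "lb_tests"]),
}

def derive_product(selected_features):
    """Application engineering step: validate a feature selection against
    the feature model and collect the configured assets for the product."""
    mandatory = {f for f, (required, _) in FEATURE_MODEL.items() if required}
    unknown = set(selected_features) - set(FEATURE_MODEL)
    if unknown:
        raise ValueError(f"Unknown features: {unknown}")
    if not mandatory <= set(selected_features):
        raise ValueError(f"Missing mandatory features: {mandatory - set(selected_features)}")
    # Each product is defined by its unique combination of features.
    return [asset for f in selected_features for asset in FEATURE_MODEL[f][1]]

# Two different products derived from the same reusable platform:
basic_service = derive_product(["routing"])
secure_service = derive_product(["routing", "firewall"])
print(basic_service)
print(secure_service)
```

Two products derived from the same platform differ only in their selected features, which is the essence of the reuse that SPL promises.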

The challenges of the SPL methodology include managing the variability information; a lack of suitable tools to fully support the methodology; and legacy systems where knowledge of the architecture and software components is not available. To address this, BT and Etisalat commissioned research by EBTIC (an ICT research and innovation centre established by Etisalat, BT and Khalifa University and supported by the United Arab Emirates ICT fund) into the development of the EBTIC-SPL tool. The tool is currently being trialled and has demonstrated the potential for improved software comprehension and governance. As software increasingly underpins the services that telcos deliver, understanding and applying the best software engineering practices is critical to service quality, speed of new service delivery and service flexibility. SPL is one of the most important approaches that telcos should be considering.

This is a summary of the full article which appeared in The Journal, Volume 11, Part 2. Read the full article here (free for members).

Insight: Telecommunications on the ring of fire

New Zealand is a seismically active country so close attention has always been given to mitigating the potential impacts on telecommunications.

Examples of mitigating activities include:
• Core transport networks are designed with geographic diversity.
• Stringent engineering formulas have been used for network deployment.
• Telecommunication buildings have a high intrinsic tolerance of earthquakes.
• Installations all have standby power (whether batteries and/or generators).

On 14 November 2016, the South Island was hit by a massive quake some 95km from
Christchurch affecting the whole of the eastern side of the upper South Island and
isolating the Kaikoura and Waiau townships. The major impact on telecommunications was the loss of the Eastern core fibre route, co-owned by Chorus, Spark and Vodafone. It
was severed in multiple locations due to massive landslides over the coast road and
tensile stresses where the surface ruptured. The district became a telecommunications
“dead zone” with telcos blind to the impacts as surveillance and network management
links were lost. The Kaikoura local exchange remained operational using power from
batteries and a built-in generator. Further out and especially in the more remote rural
areas, customers either lost service immediately or progressively as the batteries
of the remote installations ran out. Airborne reconnaissance revealed the full extent of
damage; reconnecting Kaikoura to the rest of New Zealand was not going to be easy.

Initially, installation of a new digital microwave radio link, and reconfiguration of an existing one, re-established limited mobile coverage and a full emergency services paging service, with the remaining capacity used for telephony. Further radio augmentation was considered, but the favoured remedial solution was to exploit the only intact fibre link in the area: the Vodafone “Aqualink” cable, a near-coast marine cable between Christchurch and Wellington. The cable comes ashore 20km south of Kaikoura and, just north of Kaikoura, there is a land-based repeater before it re-enters the sea. The terrestrial section of Aqualink suffered damage but continued to work. It would be possible to intercept the cable at the repeater site and, with a minimal fibre lay, connect it to the “normal” Eastern core fibre link. With goodwill – and some horse-trading – normal mobile and fixed line services (including broadband) were restored just four days after the quake.

Damage to the Eastern core fibre covered a distance of over 84km. Although accessible in some areas, in many others it was buried under hundreds of tons of rock. Where the actual fault location was not accessible, restoration was by the installation of cable overlays of tens to hundreds of metres. Extensive use of helicopters achieved many of these overlays. To date, traffic into the Kaikoura area has been left on the Aqualink, as the works to reliably restore the Eastern core fibre will take over 12 months.

End users’ access in the Kaikoura area is predominantly over copper, with some radio to more remote areas. Impacts were greater in the rural areas; tensile forces pulled joints apart or fractured copper pairs. Displacement of bridge abutments and culvert crossings also caused cable failure. Cable testing indicated that some cables had been significantly stretched. Aftershocks were a problem, often resulting in another fault to a previously-repaired section. For customers fed by radio, the main problem was power failure once their battery supply was exhausted, typically within 24 hours. Standby generators were deployed but had to be airlifted by helicopter, and the refuelling runs added significant cost. Helicopters were used to transport the technicians, and special precautions had to be implemented to ensure staff safety.

A number of lessons were learned from this event:
• The importance of having good civil defence arrangements in place was key to managing the situation.
• Getting to the faults quickly is vital; ingress of water into a cable or leaving batteries in a discharged state for too long exacerbates the fault.
• Events such as this remove any complacency about ensuring fibre optic links and electronic systems have sufficient diversity. The “Aqualink” cable sharing solution shows that there is potential to look further into creating physical network sharing contingency plans.
• The New Zealand Government is undertaking a review of the level of resilience across the entire telecommunications sector and the extent to which telcos were able to fulfil their legal civil defence emergency obligations.

This is a summary of a full article which appeared in The Journal, Volume 11, Part 2. Read the full article here (free for members).

Insight: Digital infrastructure & the impacts of climate change

Climate change adaptation is about building in resilience to things like flooding, storms and sustained high temperatures.

In a policy context, it means continuing to enjoy our quality of life by ensuring the systems on which we rely still function adequately – systems such as electricity, water, sewerage, transport and telecoms. These systems are heavily interdependent; our water supply cannot work without electricity, and air traffic control cannot work without digital communications.

Adaptation planning previously focused on improving the resilience of individual services; the UK Government can require certain infrastructure sectors to report on their climate change readiness. More recently, attention has shifted to interdependencies – firstly to the dependence of most sectors on energy and now to the dependence on ICT.

Government recognised it is dealing with a complicated system of systems; DEFRA invited the ICT sector to report on its readiness for climate change, with Ofcom invited to cover communications. techUK, an industry association in the ICT sector, drew together the submission for digital infrastructure, i.e. fixed and mobile access networks, core networks and data centres. It described the climate change risks and the possible impacts, identified sources of information on resilience planning, and noted the barriers to the development of adaptive capacity. Mindful that much of the fixed line infrastructure is delivered by one provider with its own well-developed corporate risk plan, the submission focused on the UK data centre estate.

UKCP09 (UK Climate Projections) provides the primary information source on scenarios for rainfall, temperature and humidity, although it is not clear how widely these are used by operators. The Environment Agency’s “Flood map for planning” provides localised flood risk information and is extensively used. The extent to which the sector is aware of other Environment Agency data, such as surface water modelling, is variable.

Digital infrastructure is relatively resilient to climate change; its asset life is relatively short, so more resilient assets can be deployed as part of the replacement cycle, and there is more built-in redundancy in ICT infrastructures. On the other hand, the sector is highly dependent on energy and there are interdependencies within the digital infrastructure sector that can be complex to analyse. The physical impacts of climate change
threats include flooding of buildings and ducts, silt and salt damage, scour of cabling and foundations, problems of access for staff, disruption to logistics, cable heave from uprooted trees, lightning damage, wind damage, higher costs of cooling, and stress
on components.

Data centres compete on the basis of resilience; the more important the data, the more resilient the facility. This is usually achieved through “redundancy”, which carries both capital and operational costs. Data centre availability classes are described under EN 50600 and there are other generic risk standards such as ISO 31000. Scenario
planning for emergencies is common.

Making the business case for investing in something that may not be needed can be a barrier, and the dependence on other sectors – especially energy, but also transport and physical “pinch points” like bridges that carry multiple utilities – makes resilience particularly complex to analyse and justify. External barriers include a policy focus on protecting physical assets rather than on business or service continuity. Also, regulatory policy driving price competition can lead to unintended consequences for resilience.

The UK’s digital infrastructure has experienced localised interruptions in service. It has implemented changes following flooding in York and Leeds in 2015, and has learned from Japan, where prior planning ensured that data centres there escaped serious damage from the 2011 tsunami.

techUK’s submission to Government recommended:
• A more standardised approach to the climate change projections so that all sectors are using the same dataset.
• A policy approach that accommodates service delivery rather than just asset protection and a more robust approach to dealing with inappropriate flood plain developments.
• A more proactive process for identifying single points of failure in physical infrastructure following incidents such as the road bridge failures at Tadcaster and Cockermouth.
• A revisiting of some regulatory aspects to ensure that they do not result in unintended consequences.

The important thing is that operators are aware that climate change risks exist, that they have to be actively managed as part of the risk portfolio and that, just like risks from
terrorism, they are constantly changing.

This is a summary of a full article which appeared in The Journal, Volume 11, Part 2 (free for members).

The Internet of Things – is it all over the top?

Around 10,000 years ago, humans started to live in settled communities. They became farmers and established an enduring connection between mankind and nature. That connection is at the forefront of the Internet of Things.

Meet Bob Dawson. Bob has driven tractors and combine harvesters for nearly 40 years. For the last three years, he has driven both vehicles at the same time. He sits in his combine harvester while it is steered via GPS. On-board applications measure the crop yield in real time, passing information back for analysis to guide future sowing and spraying. The driver-free tractor is driven by the same application as in the harvester cab. The trailer’s sensors monitor when it is full and the harvester automatically switches off the grain chute and tells the tractor to take the trailer to a waiting truck. Precision agriculture is with us and the Internet of Things (IoT) is at its heart.
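
As a hedged illustration of the machine-to-machine hand-off described above, the sketch below models the “trailer full” event triggering the chute shut-off and tractor dispatch. All class names, thresholds and messages are invented for the example; they do not describe the actual on-board applications.

```python
# Illustrative sketch of the harvester-to-tractor hand-off described above.
# All class names, thresholds and messages are assumptions for illustration.

class Trailer:
    CAPACITY_KG = 12_000

    def __init__(self):
        self.load_kg = 0.0

    def is_full(self):
        return self.load_kg >= self.CAPACITY_KG

class Tractor:
    def dispatch_to(self, destination):
        print(f"Tractor: driving trailer to {destination} via GPS route")

class Harvester:
    def __init__(self, trailer, tractor):
        self.trailer, self.tractor = trailer, tractor
        self.chute_open = True

    def on_yield_sample(self, kg):
        """Called for each real-time yield measurement from the header."""
        if self.chute_open:
            self.trailer.load_kg += kg
        if self.trailer.is_full() and self.chute_open:
            self.chute_open = False                      # stop the grain chute
            self.tractor.dispatch_to("waiting_truck")    # send the trailer away

harvester = Harvester(Trailer(), Tractor())
for sample in [4_000.0, 5_000.0, 4_000.0]:   # simulated yield readings (kg)
    harvester.on_yield_sample(sample)
```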

What is the relationship between the IoT and Over The Top (OTT) applications, and what are the implications of IoT for customers, for network operators and for society in general?

There are four umbrella elements of the IoT:

  • Devices – A truly interconnected world could have hundreds of billions, even trillions, of devices. For example, every piece of packaging for every prescribed drug, every food wrapper. Every few metres of every stream in the world could have its own device in the water enabling analysis of water levels, quality, climate and sustainability.
  • Connectivity – for many IoT applications, the connectivity requirements are ubiquitous coverage, low unit costs, low data rates over many years. There are other applications that require high speed, low latency and massive bandwidths. There are many fragmented alliances and consortia and, if some succeed, it could be a crack in the mobile operators’ defences as the gatekeeper for business-grade mobile connectivity.
  • Applications – Uber has become a well-known OTT taxi-hailing application. But Uber has bigger goals – to remove the need for people to own, or even drive, cars and to remove the need for towns to build any more car parks, where cars sit doing nothing all day while their owners are at work. Applications are often seen as the place to be in the value chain, as they are perceived to be where the value flows to. Barriers to entry are small – an application needs network connectivity to run but does not require the negotiation of a direct relationship with the network operator.
  • Analysis – the volumes of data produced by IoT devices and applications, combined with unstructured, qualitative data such as social media feeds, mean that “data science” is a critical skill. The automated nature of IoT means that much of the interpretation will itself be done in an algorithmic, or possibly more “neural network learning”, way by machines themselves.

What are the implications for network operators? Consumers are more than willing to purchase applications and services directly from third parties, minimising their dealings with fixed and mobile operators. IoT could extend this separation dramatically. Operators will therefore have to build networks and carry data packets in such a way that unit costs fall more quickly than the price they can charge. At the same time, operators have vast amounts of network data and customer/device data. They will have to develop their own data analysis skills, both to improve their own business and to sell insight-based services to others.

And the implications for wider society? If an individual driver has a car crash, then that driver might learn for next time. If an autonomous Tesla car has a crash, then all Tesla cars in the world can learn for next time. A world in which high-quality interconnected networks enable new applications and services to launch rapidly to reach and connect consumers, citizens and devices over the top of those networks ought to be a good thing. At the same time an interconnected network is only as secure as its weakest connection. It can be hacked.

IoT has the potential to become embedded in almost every aspect of society and so its adoption raises questions of balance between individual, social, political and economic goals. Solving these is likely to be a series of steps and iterations – rather like a human version of a self-learning network.

This is a summary of a full article which appeared in The Journal, December 2016. To access the article in full visit the ITP website (free for members).

Back in the day…

The good old telephone service has gone through many changes during its lifetime but perhaps the most significant was the move from analogue to digital, reflects Professor Nigel Linge.

Naturally, the human voice is inherently analogue but transmitting it as such makes the resulting electrical signal susceptible to the impact of noise and attenuation, leading to a reduction in overall voice quality. However, in 1938 a radical alternative technique was proposed by Alec Reeves, who was working at International Telephone and Telegraph’s laboratory in Paris.

Reeves proposed that the analogue signal should be sampled at regular intervals, with the amplitude of the voice signal being converted into a binary number and then transmitted as a series of electrical pulses. So long as these pulses could be detected at the receiver, the original analogue voice could be reproduced without degradation. Known as Pulse Code Modulation (PCM), the technique earned Alec Reeves French Patent No. 852 183 on 3 October 1938, and his ideas in effect heralded the dawning of the digital age. Unfortunately, as is often the case with pioneering ideas, the technology of the day was not capable of realising the complexity of PCM.
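
A minimal sketch of Reeves’ idea is shown below: sample, quantise, and send each amplitude as a binary word. The parameters are illustrative (telephony PCM later standardised on 8,000 samples/s with 8-bit companded samples, whereas this toy example uses simple linear quantisation).

```python
# Toy illustration of Pulse Code Modulation: sample an analogue waveform
# at regular intervals, quantise each amplitude to a binary number, and
# transmit the bits as pulses. Parameters are illustrative only.

import math

SAMPLE_RATE = 8000      # samples per second
BITS = 8                # bits per sample (linear here; real telephony PCM is companded)
LEVELS = 2 ** BITS

def pcm_encode(signal, duration_s):
    """Sample and quantise a function signal(t) returning values in [-1.0, 1.0]."""
    words = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE
        amplitude = max(-1.0, min(1.0, signal(t)))
        code = int((amplitude + 1.0) / 2.0 * (LEVELS - 1))  # quantise to 0..255
        words.append(format(code, f"0{BITS}b"))             # binary word = pulse pattern
    return words

# Encode 1 ms of a 1 kHz tone (eight samples at 8 kHz).
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
print(pcm_encode(tone, 0.001))
```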

In fact, it was not realised until 1968 when the GPO in Britain opened the world’s first PCM exchange which was the Empress telephone exchange near Earls Court in London. This was the first exchange of its type that could switch PCM signals from one group of lines to another in digital form and it laid the foundations for the more widespread use of digital switching that now sees PCM at the heart of our fixed-line, mobile and IP-based telephony along with all our digital audio systems.

At the other end of the scale, and seemingly trivial in comparison, BT changed the way domestic telephones were connected to its network on 19 November 1981 with the introduction of the plug and socket interface. Up until this time the telephone in your home was permanently wired to the BT network, which meant that connecting a computer to the phone line could only be achieved using either an acoustic coupler or a telephone with an integrated modem such as the Type No13A Datel modem set. The best speeds that could be obtained with such systems were typically 300bit/s. However, the introduction of the plug and socket interface in 1981 changed all of this. The telephone service provided by BT was now terminated in a ‘master’ socket into which the customer could plug their own phone.

More importantly, this meant that there was now a direct electrical connection to the external phone line, which provided a more efficient mechanism for connecting a computer via a modem. In 1988 the V.21 modem increased speeds to 1.2kbit/s; in 1991 this was extended to 14.4kbit/s with the V.32 modem; and ultimately in 1998 speeds reached 56kbit/s with the V.90 modem. Thereafter the introduction of Digital Subscriber Line technology led directly to today’s superfast services – all thanks to the introduction of a simple socket.

Today, with 15 per cent of UK households now officially declared as mobile only, there is a slow but growing trend away from traditional fixed-line telephony. An important step on that journey was made on 14 December 2009 when the Scandinavian telecommunications company TeliaSonera became the first operator to commercially launch a publicly available LTE (4G) mobile network. Back in 1981 Scandinavia had led Europe into the mobile era, and now in 2009 it was leading the world into 4G deployment with services opening in the central parts of Stockholm and Oslo. The network infrastructure was provided by Ericsson in Stockholm and Huawei in Oslo and was initially targeted at mobile broadband customers using a Samsung-provided LTE-only USB dongle. Proper 4G handsets took a little longer to materialise but once again Scandinavian companies led the way in Europe when the Samsung Galaxy S2 LTE became available to their customers in 2012. Later that year the UK witnessed the launch of its first 4G network. Today there are over half a billion 4G subscribers across 151 countries with, interestingly, the UK now cited as offering some of the highest average 4G download speeds in the world.

Mobile consolidation

For most of the past 20 years, competition authorities in mobile markets have focussed on securing the entry of additional competitors. Markets with high entry costs, like mobile telecoms, would not be expected to accommodate a very large number of firms but some degree of competition between a number of firms – more than one but not many – delivers better results.

Entry into mobile is restricted by the availability of radio spectrum. Opportunities arise by releasing spectrum from broadcasters or the military. But later entrants then face the challenge of competing with established firms. This was fine when demand was growing but today later entrants have to compete for existing customers of established operators.

But by the time of 4G (after 2010), interest in entering mobile markets had largely evaporated. The amount of new spectrum available was more limited and there had been some poor commercial returns by new entrants to 3G. Rather than promoting entry, 4G was driving the market towards consolidation. Firms like Hutchison were reluctant to invest in 4G when they had struggled commercially with 3G. Other firms, like Telefonica, felt that they could deploy their capital more profitably in emerging markets in Latin America.

Pressures on later entrants increased following the global financial crisis after 2007. Moreover, by 2010, operators were also feeling the effects of competition from over-the-top applications such as WhatsApp and tighter regulation of international roaming charges and interconnection rates. But the main driver of consolidation was simply that late entrants found themselves unable to achieve sufficient scale to be profitable within a single technology cycle.

Rather than winning customers from rivals, the other way to achieve both scale and cost savings is through mergers with rivals. These savings mean that a rival operator can invariably offer a higher purchase price for the asset than a buyer that does not already have operations in the market. The other option for sub-scale or unprofitable firms was to exit the market by selling out to another party who is outside the mobile market. An example of this was EE, which was acquired by BT.

Consolidation can provide an escape route for sub-scale firms but it has less obvious benefits for consumers. The European Commission is concerned that prices will not be as low after the merger as they might otherwise have been. The advocates of mergers claim that, in the longer term, the cost savings from combining assets and operations will offset some of the upward pressure on prices that might otherwise be associated with a reduction in the number of firms. The Commission has generally rejected these efforts, finding that any savings will more likely bring higher profits for the owners of merging firms rather than being passed on as lower prices.

The other and more interesting claim relates to future investment. Advocates of mergers claim that the merged firm will be better able to invest because of its greater scale and/or higher levels of profitability – claims that are extraordinarily difficult to assess.

Consequently, competition authorities have only been prepared to approve mergers if the parties are also prepared to take various steps to replace the firm that is exiting with another entrant. But why, when one set of investors were seeking to exit, would another set be persuaded to enter? The predictable lack of interest has prompted authorities to promote various models which would allow new firms to enter the market at lower cost and risk by using the merged firm’s existing network as a Mobile Virtual Network Operator for an extended period.

The most interesting and immediate question is what happens to those firms whose merger plans have had to be abandoned. Do they sell to a party from outside the market at a lower price? Do they find another way to grow or to become profitable? In Europe, such firms can pursue the ‘failing firm’ defence, arguing that without a merger the firm will exit the market altogether.

The current debate reveals how little we actually understand about what determines the performance of these markets. We know monopolies are generally to be avoided, but we know very little about what might ensure higher levels of investment, or how these investments might translate into prices, quality or other outputs that consumers care about.

This is a summary of a full article which first appeared in The Journal, December 2016. To read the article in full please visit the ITP site (free for members). 


Lateral security: networked immunity everywhere!

At a modest estimate the ‘Dark Side’ should be overpowered by the ‘Good’ of the computing world by at least 3000:1. But how well is the ‘Good’ doing? Peter Cochrane explains…

Our governments, companies, banks, institutions and security services are more than a match for the rogue states, organised crime, hacker groups, and those lone sharks huddled over screens in a multitude of bedrooms. The Good has more manpower, compute power, facilities, knowledge and money by a huge degree, and yet the Dark Side continues to prosper! How come?

It is all down to the power of networking. One side operates in a secret ‘need to know’ mode whilst the other is of necessity ‘need to share’ – it is as simple as that. The Dark Side are the ultimate networkers and sharers, and the magnification effect is exponential.

So what can the Good do to win? Firewalls don’t work and malware protection is always after the fact, a band aid applied to a known and already serious threat. The Good are also slow to detect incursions and even slower to respond, in effect, always on the back foot. We need to be pro-active, fast and anticipatory; then and only then can we hope to turn back the tide of the Dark. If we do not, we are already hatching a new and far worse nightmare called the ‘Internet of Things’ – or more correctly, ‘Clouds of Things’. The potential risks are obvious and the solutions non-existent. Today’s design and build of the IoT is so badly flawed it is bound to end in tears.

We probably have one big shot at creating an effective defence mechanism. This is founded on the established biological principles of white cells and auto-immunity.

Building hard and soft malware traps into every chip, card, device, shelf, rack, suite, room, building and network will cure the problem. The automatic detection and isolation of malware, followed by removal and destruction is a necessity because people cannot do it; this appears to be the only response likely to disrupt and put the Dark Side on the back foot. If the organisations and people of the Good will not network and share, then their hardware and software has to do it for them.

Is such a proposition viable? Some big players are looking at it already and the hardware and software overhead appears minimal. And so we might conjure a number of future scenarios, but the most iconic goes something like this. A man walks into a coffee shop with an infected mobile which tries to infect everything on WiFi and Bluetooth that is in range.

But these devices recognise or suspect an attack and isolate the infected device. They then collectively search out the ‘antidote malware remedy’, and upload it to attack the infection. Once confirmed as clean, the mobile device is accepted back into the community and allowed to connect and communicate.
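
A hedged sketch of that detect, isolate, remediate and re-admit cycle is given below. The device model, detection rule and “antidote” lookup are all invented for illustration; no real protocol or product is implied.

```python
# Illustrative sketch of the detect / isolate / remediate / re-admit cycle
# described above. Everything here (classes, detection rule, antidote
# lookup) is an assumption made for the example.

KNOWN_ANTIDOTES = {"wifi_worm_x": "antidote_x"}   # collectively shared knowledge

class Device:
    def __init__(self, name):
        self.name = name
        self.infection = None
        self.quarantined = set()

    def receive(self, sender, payload):
        if payload in KNOWN_ANTIDOTES:                 # attack recognised or suspected
            self.quarantined.add(sender.name)          # isolate the infected device
            sender.apply(KNOWN_ANTIDOTES[payload])     # upload the antidote
            if sender.infection is None:               # confirmed clean:
                self.quarantined.discard(sender.name)  # re-admit to the community

    def apply(self, antidote):
        print(f"{self.name}: running {antidote}, infection removed")
        self.infection = None

phone = Device("infected_mobile")
phone.infection = "wifi_worm_x"
laptop = Device("cafe_laptop")
laptop.receive(phone, phone.infection)     # attempted infection over WiFi
print("Quarantined:", laptop.quarantined)  # empty again once the phone is cleaned
```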

This might all sound complex and cumbersome, but it turns out not to be so, and such detection and immunisation cycles can occur in seconds, unnoticed by the human owner. Better still, we no longer need to get involved in security as individuals; displaced by machine intelligence, we are left to get on with what we do best – creating, solving, building and changing. Where does the ultimate responsibility then lie? The producers and suppliers of hardware and software have a new product line, service and responsibility.

Of course, the Dark Side will try to subvert all this, but by then it could be ‘game over’ and too late. I just hope the Good get off the grid and cross the winning line really soon!

This article first appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

Dr Peter Cochrane, OBE, BSc, MSc, PhD, DSc, CGIA, FREng, FRSA, FIEE, FIEEE.

Peter is an entrepreneur, business and engineering advisor to international industries and governments. He has worked across hardware, software, systems, networks, adaptive system design and operations. He currently runs his own company across four continents, is a visiting Professor at Hertfordshire University, was formerly CTO at BT, and has received numerous awards including an OBE and the IEEE Millennium Medal.

Back in the day

From murder most foul to Integrated Services Digital Network and 3G, January, February and March have seen some extraordinary highlights in the telecommunications world, says Professor Nigel Linge.

On 1 January 1845 John Tawell entered a chemist’s shop and bought some Scheele’s Acid, a treatment for varicose veins that contained hydrogen cyanide. He travelled to Salt Hill near Slough where he met his mistress, Sarah Hart, whom he then proceeded to poison with the acid. Sarah’s screams and cries for help were heard by a neighbour but John ran off and made his way to Slough railway station where he boarded the 7:42pm train to London Paddington. Unfortunately for John, Slough and Paddington stations had been fitted with a Cooke-Wheatstone two needle electric telegraph system.

Sarah Hart’s neighbour raised the alarm and the local vicar pursued John to the station, where he asked the Station Master to signal ahead to Paddington and alert the Police. However, the telegraph system could not send the letters J, Q or Z, which created problems because the vicar said that John was dressed like a Quaker! The word Quaker had to be sent as Kwaker, which caused the telegraph operator at Paddington great consternation in understanding it, even after retransmission. Eventually the message was handed to Sergeant William Williams, who was given the task of tracking down and arresting John. At his trial, The Times newspaper reported that, ‘Had it not been for the efficient aid of the electric telegraph, both at Slough and Paddington, the greatest difficulty, as well as delay, would have occurred in the apprehension’. John Tawell was hanged at 8am on Friday 28 March 1845 and thereafter became known as ‘The Man Hanged by the Electric Telegraph’.

The electric telegraph was the country’s – and the world’s – first data network. Of course, data in that sense was the written telegram and the network never reached into our homes. However, with the emergence of the home computer and our onward drive towards digitisation came a demand for public data networks for both business and domestic consumers. In response, on 7 February 1991 BT launched its Integrated Services Digital Network (ISDN) service. Originally developed in 1988 by the CCITT (ITU), ISDN provided a digital connection comprising two symmetric bi-directional data channels (2B), each operating at 64kbit/s, and a 16kbit/s signalling channel (D). This basic rate 2B+D service offered much higher data rates than its competitor technologies and proved especially popular with the broadcasting industry, where the guaranteed data rate with its low latency was ideal for high quality voice and music transmission.
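
As a quick sanity check on those channel rates, here is a small sketch using only the figures quoted above:

```python
# Aggregate capacity of an ISDN basic rate (2B+D) interface, using the
# channel rates quoted above.

b_channels, b_rate_kbit = 2, 64     # two bearer channels at 64 kbit/s each
d_rate_kbit = 16                    # one 16 kbit/s signalling channel

usable = b_channels * b_rate_kbit   # 128 kbit/s available for user data
total = usable + d_rate_kbit        # 144 kbit/s including signalling
print(f"User data: {usable} kbit/s, total 2B+D: {total} kbit/s")
```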

BT developed and marketed its ISDN service as Home Highway but in 2007 withdrew it from domestic customers because of the rise in the popularity and capability of xDSL broadband access. As of 2013 there were still 3.2 million ISDN lines in the UK, but this number is falling year on year. Within Europe, ISDN was most popular in Germany, where at one point it accounted for 20% of the global market.

Delivering data into the palm of your hand offered a different challenge but took an important step forward on 3 March 2003 when the first mobile network to offer a 3G service was launched in the UK by a new entrant into the mobile marketplace. Telesystem International Wireless (TIW) UMTS (UK) Limited had the backing of Hong Kong-based Hutchison Whampoa, but soon Hutchison bought out TIW to create H3G UK Limited which, having acquired spectrum from the infamous UK 3G auction, marketed its new service under the more familiar ‘Three’ brand. Choosing to launch their service on 3/3/3 was therefore an opportunity not to be missed! Quite how much network coverage was available at that time remains a point of conjecture. Nevertheless, the UK had entered the 3G world, with the first public 3G call being made by Trade and Industry Secretary, Patricia Hewitt, who called Stephen Timms, Minister for e-Commerce.

Three launch devices: NEC e606, Motorola A830 & NEC e808

The move to 3G brought with it the promise of higher data rates and, at the time of launch, Three offered its customers a choice of three different handset options: the Motorola A830, NEC e606 and NEC e808. As is often the case, these first generation handsets were actually poorer than their predecessor technology, being bulkier and suffering from poor battery life. That aside, by August 2004 Three had connected one million customers.

This article first appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.

Packet versus voice switching

The prevailing view is that moving voice to an IP solution gives many advantages, especially cost. What are the economic arguments for switching voice calls in packet rather than circuit mode, putting to one side the technical and quality-of-service issues?

Some inter-related themes form the big picture of telecommunications networks today:

• The big architectural difference between circuit and packet-switched networks is the location of the service control – within the network with circuit-switched, at the edge with packet-switched. Control at the edge and use of an essentially dumb packet network enables ‘over the top’ service providers, such as WhatsApp, Skype, FaceTime, to have the commercial relationship with the users.

• There is a growing sentiment that always-on access to the Internet is a basic human right. Unfortunately, there is a tendency also to devalue content, insofar as people do not care to have to pay for it.

• Many network operators are considering how best to replace their circuit switches forming the PSTN. Replacement by IP systems has proved difficult and many operators are waiting to take advantage of the shift of voice away from the fixed PSTN onto mobile. Despite this, the fixed PSTN is still essential in most countries as a network of last resort and for interconnection.

On the question of whether packet is cheaper than circuit switching for voice, there are several points to consider:

  • Switching system costs – Many would say that packet switching is cheaper – after all, many supported services are free. Often overlooked, though, is the price users pay for the infrastructure supporting voice over Internet Protocol (VoIP) – the computer/tablet/smartphone, broadband access and Internet Service Provider service, etc. Therefore, the question is whether there is any inherent cost difference (as opposed to a price difference). Interestingly, there is remarkably little difference between the elements of a circuit switch-block and those of an IP router; both usually comprise a time-space-time switch with similar semiconductor technology. So, apart from differences in the costs of signalling, the inherent costs are essentially equal.
  • Terminating functionality – A profound influence on all network costs is the location of the interfacing equipment. Terminating a line on the exchange represents some 70% of the total cost of the switching system, costs incurred by the network operator. However, for VoIP services, the analogue-to-digital encoding, packetisation, powering and ring-tone generation are in the users’ devices (a computer or tablet), the costs of which are borne by the user, giving a cost advantage to VoIP providers. However, if a fixed operator hopes to use VoIP to replace its circuit switches and if many of its users wish to keep their telephone and line, the operator will have to provide the terminating functions at the boundary of the packet switching system. Mobile operators do not have this concern, as mobile handsets provide the functionality.
  • Multi-service platform – A single all-purpose platform supporting all services has long been seen as a way of saving capital and operational costs. This advantage is true with any technology, not just IP (indeed, earlier multiservice platforms were circuit-based).
  • Bearer traffic loadings – The potential loadings of 85% or higher with packet networks compare favourably to about 70% in circuit-switched networks. However, such loadings on packet networks are avoided to reduce the probability of packets being delayed, particularly for latency-intolerant services such as voice. A loading of about 30% is typically required to ensure voice quality, so reducing any cost advantage of packet switching (see the sketch after this list).
  • Industry economies of scale – There is a general move towards the use of IP technology for networks as there is with computer-controlled digital electronics in general. Since vendors’ prices are driven by economies of scale, today’s prices of packet switching benefit from this shift – which rather makes the economics of circuit versus packet switching a self-fulfilling prophecy.
  • On a like-for-like basis, therefore, there is no inherent cost difference between circuit and packet-switching technology. However, packet can benefit from the shift of the user network interface and the move to a multi-service network. The enthusiasm of fixed operators to move to VoIP has slowed because of the need to support existing fixed subscriber lines. The UNI location is not an issue for mobile networks; existing 4G networks will shift to all-IP architectures as the existing circuit-switched mobile exchanges are withdrawn.
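
To illustrate the loading argument in the bearer traffic bullet above, the sketch below compares the capacity that must be provisioned per unit of carried traffic at the quoted loadings. The linear "capacity per unit of traffic" cost model is a simplifying assumption for illustration only.

```python
# Sketch of how achievable loading changes the capacity that must be
# provisioned per unit of carried voice traffic. The 70% / 85% / 30%
# figures come from the bullet above; the linear cost model is a
# simplifying assumption.

def provisioned_per_unit(loading):
    """Capacity that must be installed to carry one unit of traffic
    at a given average loading (utilisation)."""
    return 1.0 / loading

scenarios = {
    "circuit-switched (~70% loading)": 0.70,
    "packet, theoretical (~85% loading)": 0.85,
    "packet, voice-quality (~30% loading)": 0.30,
}

for name, loading in scenarios.items():
    print(f"{name}: {provisioned_per_unit(loading):.2f}x capacity per unit of traffic")
```

On these assumptions, running a packet network at the voice-friendly loading requires roughly twice the provisioned capacity per unit of traffic of a circuit-switched network at 70%, which is the point the bullet makes.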

This is an executive summary of the full article which appeared in The Journal, Volume 10, Part 1 – 2016. The Journal is free to all ITP members; to find out about joining, visit our website.