Thursday, November 22, 2018

Internet Security – Part 5: Installing better locks

The original article was published at https://ift.tt/2PJjlZ0

What can be done to improve security?

So far this article has focused on the problems with the security of Internet-facing IT systems. I hope it is obvious to all that much needs to be done to improve the entire landscape, from the way users authenticate their access to sensitive systems through to the responsibilities of companies and organisations to respect and safeguard the personal data and possessions with which they are entrusted.

So, the truth is that much can be – and arguably should already have been – done to improve data security on the Internet.

When reading the outlines below, please bear in mind that the information is presented back-to-front. The reality is that no single software developer – or even a company’s IT director – can implement the “security first” approach to systems design and operation that is needed to deliver secure systems operable on the Internet.

The impetus and instruction for the changes necessary to improve the security landscape must come from the top. For smart companies that means the boards and directors taking the time and trouble to learn this “difficult techie stuff” to at least the level necessary to know the questions to ask and how to evaluate the responses received. Personally, I believe there is commercial advantage to be had for companies that can promise true security to their customer base.

Ultimately, the final top-down effectors of change are governments, both national and supra-national. If companies and organisations fail to get their acts in order to prevent the current volume and scale of abuse of personal data seen on the Internet, then they must expect that laws will be enacted to enforce the necessary changes. Complaining about such legislation has all the effect and moral rectitude of protesting, after causing a traffic accident, that of course you don’t hold a driving licence – after all, you are entirely competent to judge for yourself whether you can drive to the standard other road users reasonably expect. In the same way, arguing that you should be left alone to safeguard – and do whatever you want with – your customers’ personal data carries no more weight.

In simple terms, change must be driven from the top. In an ideal world, company boards would become proficient in IT matters and enact the necessary changes. In the real, practical world we inhabit I suspect that companies will only change when external forces – market expectations or a legal framework carrying truly painful penalties – force change.

Technical changes

The design, construction, deployment and operation of commercial IT systems is such a diverse and complex area that it is impossible to do more than cover some general principles in an article such as this. There is, I believe, one principle that should be kept in mind by anyone responsible for the design or implementation of a complex IT system – that principle is the concept of control.

What do I mean by “control”? Simply the degree to which you can retain control over components of the system being deployed. There is a modern trend to treat complex IT systems as mere collections or assemblies of building bricks – the idea being that a complex system is built by combining the “best in class” component specialist sub-systems.

So, we see systems constructed out of some supplier’s accounting system, another’s inventory system, another’s customer support system, another’s marketing and customer relations system, an off-the-shelf customer engagement system based perhaps on a blogging platform “repurposed” for the need … and so on.

Systems that are cobbled together like this represent the worst examples of expediency over long-term usability and control. They are actually more expensive to put together (every component must have individual interfaces to every other component with which it must share data or processes). Data is spread around like confetti between different, competing databases – each of which must be updated and kept in step with the others in real time if anarchy is not to quickly arise. There are more purpose-written processes to develop, test and maintain. In production, such systems quickly become a nightmare to maintain as different components develop and require interface changes on schedules determined by their developers – the alternative being to continue using now-outdated versions, with all the security and reliability implications that involves.

It should be obvious that the costs and complexity of managing such systems destroys reliability while harming control enormously.

Put simply, organisations that use this approach to complex systems have no effective control – they are entirely reliant for the reliability and security of the overall system on the individual component suppliers – and their own ability to keep pace with changes required to the purpose-written interfaces between the components.

A simple piece of advice – understandable by the least technically proficient company director – complexity is the enemy of control. If you seek a system whose security and reliability you can control – keep it simple.

System design

The lesson here is simple – the design of systems must be driven by a “security first” culture. Every function, operation and change to the design of the system must be subject to consideration of its impact on the security of the system and the data it holds and processes.

It is important to understand that prioritising security need not negatively impact either the usability of the system or its flexibility and ability to adapt to an organisation’s developing needs.

All design and change decisions must pass a security review assessing their impact on the attack surface of the overall system and their potential to open exploits – either against the directly affected parts of the system or, indirectly, by granting access to other privileged data or functions. Only after this security assessment is successfully passed should the design be passed forward for development.

Such an assessment should never have allowed an authentication system such as that deployed by Credit Agricole to be developed, let alone deployed for live use. Equally, had Facebook conducted full and proper assessments of the developer interfaces it introduced (driven by the desire to gain more users by enabling outside developers to build widgets that would drive user engagement on the site … which inevitably involved exposing at least some user data to the interface), the ability to abuse the interface to elevate permissions and gain access to personal data sets that were not part of the intended design should have been recognised – and the detailed design altered to prevent such abuse before development began. A day of consideration and reflection saves weeks of a CEO having to testify before government committees and courts.

When considering the security implications of any design decision, it is essential that the normal developer mind-set of “this will be great/exciting/fun” is put aside in favour of a mind-set that puts the assessor in the mind of someone trying their hardest to attack or abuse the system. While the role requires a level of technical proficiency at least as high as that of a good developer, it should go to someone versed in “white hat” hacking – ie; trying to penetrate a system in an ethical manner in order to identify and report flaws capable of exploit, so that they can be addressed before being released to public risk.

System change management

All systems must be capable of change and adaptation over time to continue to meet the evolving needs of the organisation they serve.

However it is vital to recognise that the area of system change presents perhaps the greatest risk to the reliability, safety and security of a running IT system.

I have seen organisations where the marketing department has been allowed simply to demand that the IT department (and its system developers) implement or install some new function or technology that the marketing department “needs” – usually right now! The result quickly becomes a web site full of third-party scripts whose workings nobody can explain and whose purpose – after only a few weeks – nobody can remember, nor which internal process within the organisation they were meant to enable. To say that this way of working presents an enormous security risk is an understatement of such proportions I shouldn’t need to be writing it.

Yet the practice can be seen plainly by anyone capable of calling up the code behind any web page and scrolling through, counting the number of such scripts and odd code included. In the worst cases I have seen organisations deploy session replay technology (the most invasive personal spying technology currently easily available to web site operators) – presumably because the marketing department “needed” to see how users were interacting with the website – implemented quickly, without a single thought given to the harm caused to site visitors or the legal risk to which the organisation was being exposed.
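
Anyone can gauge how far this has gone on their own site. As a minimal sketch (plain Python; the URL is a placeholder to replace with your own site), the following fetches a page and lists every script it pulls in from a foreign domain:

```python
# Minimal sketch: enumerate the third-party scripts embedded in a web page.
import re
import urllib.request
from urllib.parse import urlparse

SITE = "https://www.example.com/"  # placeholder - point this at your own site

html = urllib.request.urlopen(SITE).read().decode("utf-8", errors="replace")
own_host = urlparse(SITE).hostname

# Find every <script src="..."> attribute in the page.
srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)

# Relative URLs (hostname None) are first-party; anything else is foreign.
# Crude on purpose: subdomains and CDNs you control will also show up here.
third_party = [s for s in srcs if urlparse(s).hostname not in (None, own_host)]

print(f"{len(third_party)} third-party script(s) found:")
for src in third_party:
    print("  ", src)
```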

No matter how great the “need” or how urgent the desire to support / release some new product or service might be, system change is system change. ALL system changes must be subject to the “security first” mantra and be put through the same assessment process as any other change proposed to the system.

System operation and management

As already described, having even a perfectly secure system design gains precisely nothing if the system is then operated in a slap-dash way.

Many people (board directors, time to perk up – I may have you in mind here) fail to realise the amount of work required to keep a system up and running on the Internet 24/365. The absolute need for company boards and directors to understand how these complex IT systems work is covered below. For now, just understand that even a simple website comprises an underlying operating system, web server software, database management software, and a panoply of web, server and client-side programming languages – oh, and let’s not forget the code that actually makes a web page display on a web-site visitor’s device.

All these layers of software and data are individually complex and all must be regularly backed up, maintained, occasionally restored both as individual components and as a whole.

When, say, the underlying operating system must be upgraded (see the section “System maintenance” that follows) the likelihood is that the server running the entire system must be rebooted – meaning that the website goes “off air” while the machine is restarting and, more importantly, any customers or visitors using the site will experience a service interruption. In practice, modern websites do not run on a single machine but on closely connected clusters of (usually virtual) machines that distribute the load placed on the service being provided, place sensitive services (such as the database) inside a network inaccessible from the Internet, and allow individual machines to be updated or otherwise serviced without interrupting continuity of service to visitors.

It is important to understand that the majority of Internet services (eg; the web-sites being discussed here) actually run on “virtual machines”. A virtual machine (“VM”) is simply one instance of a “machine” that runs (usually) alongside other virtual machines on a single physical computer server. Several technologies (which are outside the scope of this article) can be used to deploy virtual machines, and the benefits of VM technology are several, including:

  • Cost reduction – a single physical server can run a number of virtual machines and make better use of the processing power available
  • Backup – the “state” (ie; a snapshot of everything the VM is processing and all the data it possesses) can be captured in a very short time and without interrupting the machine’s operation (see the sketch after this list)
  • New VMs can be created or destroyed at whim – if more processing power is required for example or a replica of a machine is required to test an upgrade or other new software a new VM can be created or copied very quickly – then used and destroyed when it is no longer needed
  • The individual functions required to operate a website (eg; a web server and a database server) can be placed on separate VMs and segregated so that access to the web server is possible from the Internet while access to the database is only possible from the internal network – helping to prevent bulk data loss or breach
  • In case of physical hardware failure or upgrade need, the VMs running on that piece of hardware can be quickly – even seamlessly – moved to another physical machine, allowing the first to be maintained
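
To make the snapshot point concrete, here is a minimal sketch of capturing a running VM’s state on a libvirt/KVM host. The libvirt-python bindings and the domain name “web01” are assumptions for illustration, not a prescription:

```python
# Sketch: snapshot a running VM via libvirt (a KVM host and the
# libvirt-python bindings assumed; "web01" is a hypothetical domain name).
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-upgrade</name>
  <description>State captured before applying patches</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")          # connect to the local hypervisor
dom = conn.lookupByName("web01")               # find the domain to snapshot
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)  # 0 = default snapshot flags
print("Created snapshot:", snap.getName())
conn.close()
```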

Like all technologies VM technology can be a double-edged sword. The same ease that allows a VM to be replicated, backed up or new VMs to be created at whim if not properly managed can lead to major problems.

Many cases of software update or change involve structural changes to configuration files or entire database structures. One of the (so far unsaid) benefits of VM technology is the ability to easily roll back a system to an earlier state – something that may become desirable in the event of data loss, system corruption or upgrade failure, for example. But sometimes this alone is not enough of a safety net in case something goes wrong during or after an upgrade. Take the example of a software upgrade that requires structural change to a large database. Sensible practice would be to take a complete copy of that database before making the structural changes – that way, should the upgrade fail in some way, the original database can be restored and the state of the VMs that use it rolled back, restoring the use of the system while the problem of diagnosing and correcting what went wrong is dealt with.
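
As a minimal sketch of that discipline – assuming a PostgreSQL database and the standard pg_dump client, with placeholder names throughout – the backup step might look like this, with the migration refusing to proceed unless the copy succeeds:

```python
# Sketch: take a restorable copy of the database BEFORE a structural upgrade.
# PostgreSQL and pg_dump are assumed; database name and path are placeholders.
import subprocess
from datetime import datetime

DB_NAME = "customers"  # hypothetical database
backup_file = f"/backups/{DB_NAME}-pre-upgrade-{datetime.now():%Y%m%d-%H%M}.dump"

# -Fc writes a compressed archive that pg_restore can rebuild from.
subprocess.run(
    ["pg_dump", "-Fc", "-f", backup_file, DB_NAME],
    check=True,  # raises and aborts the upgrade if the backup fails
)
print("Backup written to", backup_file)
# Only now should the schema migration itself be allowed to run.
```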

The next thing to understand is that many of the physical servers that run all these VMs are no longer present in data centres on individual company premises. With the rise in popularity of cloud computing, the likelihood is that these VMs exist “in the cloud”. In real-world terms this means that a VM may as easily exist in a datacentre in Phoenix as in Strasbourg, Dublin or London.

Data storage can also be virtualised in a similar way to a machine. So that database – which might occupy several terabytes of storage – is just another instance located somewhere in the cloud. And whereas in the days of real machines and proprietary company data centres a company would have to invest in some multiple of the actual storage capacity it needed to operate its IT services, in the age of the cloud, if a quick copy of a database is required during a system upgrade, the answer is simply to create (a better term is perhaps “rent”) another storage instance of the required size, duplicate the data to that, and destroy it as soon as it is no longer needed – much more cost-effective.

Here comes the cutting edge of the sword. If the operator who creates the new storage instance omits to secure it (by setting passwords and ensuring it is inaccessible from the Internet for example) … then the entire database contents become publicly available.

As organisations right up to the super-secret American NSA have discovered to their cost, this kind of simple human error makes all the work put into securing operational systems worthless, as the data is gratefully hoovered up by “the bad guys” in an instant. And just like that, a major data breach has occurred.
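
The counter-discipline is mechanical: never copy data into a storage instance until you have verified it is locked. A sketch, assuming AWS S3 via boto3 (the bucket name is hypothetical):

```python
# Sketch: refuse to use a cloud storage bucket unless public access is blocked.
# AWS S3 via boto3 is assumed; the bucket name is a hypothetical placeholder.
import boto3
from botocore.exceptions import ClientError

BUCKET = "upgrade-scratch-copy"
s3 = boto3.client("s3")

try:
    cfg = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
    locked = all(cfg.values())  # all four public-access switches must be on
except ClientError:
    locked = False              # no public-access block configured at all

if not locked:
    # Lock the doors before a single byte of data goes in.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
print("Bucket is private - safe to copy data into it.")
```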

System operation and management – the right way

Having looked at how modern web sites or services actually exist and are delivered in practice and having identified some of the horrific consequences of making simple human errors how does an organisation preserve its security and protect itself from such horrors?

The answer is to go back to lessons I learned at the dawn of commercial data processing when giant mainframes the size of power sub-stations (but with less processing power than the phone in your pocket) ruled the computing universe.

Answer: Procedure manuals and check-lists.

It really is that simple – backed up, of course, by management oversight that ensures those procedures and check-lists are followed, and adequately recorded so that every process can be audited … and by a corporate policy, enshrined in employment contracts, that defines failure to follow a defined procedure and rigorously complete each check in an auditable manner as gross misconduct, possibly leading to demotion or instant dismissal.

If that sounds simplistic and drastically harsh, may I suggest you spend a day as a ‘fly on the wall’ inside a modern IT operations centre – and observe the young techies’ fingers fly over keyboards connected to multiple machines behind which lie dozens or hundreds of VMs, gleefully displaying their advanced skills by “spinning up” and destroying VMs at whim and moving major datasets around with no more effort than offering a toffee from a jar.

All unrecorded and all subject to a single finger-slip that either destroys the live dataset – or puts a copy out on the Internet with no protection at all.

The experience should suffice to illustrate the need for procedure manuals, check-lists and good old-fashioned accountable management.
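
What might such a check-list look like in code? The sketch below is deliberately primitive: a named operator must confirm every step, every confirmation is time-stamped, and the whole run is appended to a log for the auditors. The steps themselves are purely illustrative:

```python
# Sketch: an auditable operations check-list - every step confirmed by a
# named operator, every confirmation time-stamped and logged for audit.
import json
from datetime import datetime, timezone

CHECKLIST = [  # illustrative steps only
    "Snapshot all affected VMs",
    "Verify the database backup completed and is restorable",
    "Confirm temporary storage instances are locked and password-protected",
    "Apply the upgrade",
    "Destroy temporary copies and record their destruction",
]

operator = input("Operator name: ")
audit_trail = []

for step in CHECKLIST:
    answer = input(f"{step} - done? (yes/no): ").strip().lower()
    audit_trail.append({
        "step": step,
        "operator": operator,
        "confirmed": answer == "yes",
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if answer != "yes":
        print("STOP: procedure aborted - record the incident before continuing.")
        break

# Append-only record for the auditors.
with open("ops-audit.log", "a") as log:
    log.write(json.dumps(audit_trail) + "\n")
```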

At the same time, adequate emphasis must be given to staff training.

  • Training on the essential need to follow procedures
  • Training on understanding the relationship of trust between an organisation and its customers – and just how important and valuable that trust relationship is to both parties
  • Training on adopting a “security first” culture – to both customer data and the organisation’s own data. No organisation should have a single member of staff that might fall for a human exploit – whether fake email phishing scam, phone call from IT support or click of a link on a dodgy (or, sadly, even reputable) web site.

System maintenance

While I understand and support a company’s decision to base its services on an enterprise-class operating system (such as Red Hat Enterprise Linux or its open-source variant CentOS) rather than a leading-edge, frequently updated variant such as Fedora, I do not understand corporate IT policies that insist every little operating system patch must be subjected to its own extensive testing before installation on live systems. Do they not understand that (especially in the open-source world based around Linux) every patch and software update has been exhaustively tested thousands of times on thousands of different hardware configurations before it hits the live release channel?

In these times of increased security bugs and flaws – never mind that updates also deliver bug fixes and performance improvements – this behaviour makes absolutely no sense.

For over a quarter of a century I have run a mix of enterprise-class and leading-edge systems. All my machines are updated at least once every day as new software updates become available. In over 25 years I can recall two instances where a nightly update has caused a problem. I defy any corporate entity that operates a deferred patch-deployment policy to tell me they have suffered as few outages over a similar time period.

Simple fact: by the time a security patch is released, knowledge of the underlying bug is already public and hackers will be busy trying to find machines that have yet to be patched. Remember those tens of thousands of attack attempts our little networks see off each day? They represent hackers probing our network and servers for known vulnerabilities – windows and doors that have been left unlocked.

On a balance of risk-benefit the benefit is clearly in favour of installing at least security related patches as soon as they become available. Not to do so invites systems to be compromised, corrupted or taken over by malicious actors.
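
Automating this much is trivial. A sketch for a Red Hat-family system (where dnf can filter for security-only updates – adjust the command for your own distribution), intended to run nightly from cron or a systemd timer with root privileges:

```python
# Sketch: apply security-related patches automatically and alert a human on
# failure. Assumes a dnf-based system (RHEL/CentOS/Fedora); needs root.
import subprocess
import sys

result = subprocess.run(
    ["dnf", "upgrade", "--security", "-y"],  # security-only updates
    capture_output=True, text=True,
)
print(result.stdout)

if result.returncode != 0:
    # Never fail silently - an unpatched system is an unlocked door.
    print(result.stderr, file=sys.stderr)
    sys.exit("Security update failed - investigate immediately.")
```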

Keeping a system up-to-date is only part of the maintenance job. An equally important function is …

Keeping abreast of the technology

As the CA example shows it’s just not good enough to install the current “best-in-class” security and forget about it.

The technology and IT security landscape is constantly changing.

As an analogy, developing and operating a secure web-site today is more like building a physical bank – then knocking it down to rebuild a more secure version every few months. That is assuming that steps have been taken to avert urgent threats as soon as they arise.

An essential requirement of maintaining a complex commercial web-site is to remain abreast of the technology and threat landscape to ensure that the web-site remains both relevant to its users and its owner and as safe as it can possibly be from external attack.

Any company director, banker or accountant should be familiar with the concept of depreciation – in short, the recognition that any asset has a finite lifetime, by the end of which it must be replaced if the job it has been doing is to continue. IT systems are not immune from this concept. Yet companies insist on treating their IT systems – in particular their Internet-facing systems – as if they were old-fashioned bank buildings: a construct that is built and then expected to last 50 or 100 years with only a new lick of paint every 5 years, a change of locks every 10 and a new vault every 20.

Modern computer systems depreciate – in their fitness for purpose – far more rapidly. Companies need to learn that their Internet-facing IT systems should be completely rebuilt at intervals of three to four years, within which period they must be constantly updated to remain safe and secure.

The past ten years (a period which, as far as I am aware, has seen no major improvement in physical door-lock technology) have seen:

  • the fundamental way that Internet users view and interact with web-sites shift from desktop (or large screen) devices to new form factors – to the point that today most web-site visitors view those web-sites through tiny mobile phone screens – in portrait format rather than the landscape format of old. We are now seeing the rise of voice technology. Soon, users will expect to have no screen or display at all – they will simply ask a device in the corner of the room (or their car, their watch or their house …) to tell them their current account balance or to transfer money to pay a bill. The technology to do this is available now – the challenge for systems designers is to deliver the desired functionality while maintaining watertight security and proof of identity.
  • FAR, FAR more important changes on the technology plane.
  • Advances in computing power have meant that security algorithms and mechanisms considered safe a decade ago can now be cracked in a few seconds. Even now, the second most commonly used password is simply “password” (the most common in 2017 was – if you can believe it – “123456”!!).
    • If hashed using the still commonly used MD5 algorithm (supposedly one-way – ie; once passed through MD5 it should be impossible to recover the original text), “password” is cracked in less than 3 seconds (most of which is the time it takes to communicate back and forth with the website over my satellite-based Internet connection) using the website https://crackstation.net/ – if you’d like to try yourself, the MD5 ‘hash’ (the encoded form of “password”) is 5f4dcc3b5aa765d61d8327deb882cf99; a short demonstration follows this list. Even a highly uncommon password in which certain letters had been replaced with numbers was cracked in no more time. In passing, this is because so many websites have leaked so many passwords alongside their hash values – so saying that this website is ‘cracking’ the password is not strictly true: it is simply looking up the hash in a database of previously obtained hash values and their original texts. But even where no database entry is available, current computer hardware can crack an MD5-hashed password in times ranging from a fifth of a second to around 20 minutes for a complex, 12-character password.
    • Information on the current best-in-class password-storage algorithms – and even instructions on how to implement them in most common web programming languages – can be found at https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet – so there really is NO excuse for not implementing or updating better security. Most schemes used to store login credentials are of the one-way kind – ie; the original password or other protected text cannot be recovered from the stored value as there is no key. Instead, at login, the credentials entered are hashed anew and the resulting values are compared to the stored values (the demonstration after this list shows one such scheme). The “penalty” when a one-way algorithm is changed is that, as there is no way to recover a user’s original plain-text password, the user must be asked to choose a new password. This is good practice in any circumstances and allows a requirement for complex passwords to be imposed at the same time. For sensitive sites (such as on-line banking) this is far from an unreasonable step to take – far more reasonable than allowing the site to remain vulnerable where user credentials have been exposed by data breach (even a breach of another site – users, despite all the warnings, will insist on using the same password to log in to multiple sites) or where the algorithm currently in use is likely to become unfit for purpose within a year or two.
  • The language in which web pages are written (HTML) is now at version HTML5 and operates in a totally different way from the version in use a decade ago.
  • The threat landscape (the number of “doors” through which an IT system might be attacked) has grown exponentially. Coupled with the equally exponential growth in the number of mechanisms (“lock-picks”) that might be used, the entire structure of a commercial web-site – and the way its components are opened to or restricted from the web – has to be rethought and systems redeveloped to take these factors into account.
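
Both password-storage points above can be demonstrated in a few lines of standard-library Python: the MD5 hash quoted earlier, then a salted, deliberately slow one-way scheme of the kind the OWASP cheat sheet describes (PBKDF2 here; the iteration count is illustrative):

```python
# Demonstration of the two password-storage points above (standard library).
import hashlib
import os

# 1. Unsalted MD5 of "password" - exactly the hash quoted above, which is
#    why a simple lookup table defeats it instantly.
print(hashlib.md5(b"password").hexdigest())
# -> 5f4dcc3b5aa765d61d8327deb882cf99

# 2. A salted, deliberately slow one-way scheme (PBKDF2; parameters
#    illustrative). The salt is unique per user and stored with the hash.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"password", salt, 200_000)

# At login the submitted password is hashed anew and compared - the
# original can never be recovered from the stored value.
attempt = hashlib.pbkdf2_hmac("sha256", b"password", salt, 200_000)
print("login ok" if attempt == stored else "login failed")
```
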
Simple steps to maintain system security

In summary there are just a few simple steps needed to maintain the security of an Internet facing IT system:

  1. Apply software patches and upgrades as soon as they become available.
  2. Keep up-to-date with changes in at least the key technologies used to defend the system and its users from attack – at minimum these changes include encryption algorithms, programming languages, database engines and other key components.
  3. Regularly check all third-party scripts that are embedded in delivered web pages. Simple rule – the more third-party scripts you allow into your system, the less control you have over its security and the safety of your customers, as you never know when a third party changes its script to do something undesirable or is itself hacked, with the effect that you now have a criminal exploit running inside your system. One practical defence is sketched below.
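
That defence is a Content-Security-Policy header telling the browser to refuse any script not served from your own origin – so a third-party script that changes, or is itself hacked, simply will not run. A minimal sketch, with Flask assumed purely for illustration (the header itself works with any web server):

```python
# Sketch: a Content-Security-Policy header that blocks all foreign scripts.
# Flask is assumed for illustration only; any server can send this header.
from flask import Flask

app = Flask(__name__)

@app.after_request
def restrict_scripts(response):
    # Permit resources - and in particular scripts - from our own origin only.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'"
    )
    return response

@app.route("/")
def index():
    return "<html><body>Any third-party script here will be refused.</body></html>"
```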

Vulnerability assessment and Penetration testing

Good systems design, excellent operation and regular maintenance are not the end of the task list for operating a secure system. I have omitted testing as (I assume) everybody knows that software and configurations must be rigorously tested before being put into live production use.

All sensitive systems should be subject to constant vulnerability assessment and penetration testing by teams properly qualified to perform those roles. The aim should be to discover “open doors and windows”, bad design, inappropriate or improper use of external scripts or third-party supplied program code, etc. Brief descriptions of the aims and purposes of such testing can be found at https://www.thesecurityblogger.com/defining-the-difference-between-a-penetration-test-vulnerability-assessment-and-security-audit/ and at https://en.wikipedia.org/wiki/Penetration_test.

To understand why, I will return again to Credit Agricole. In November 2017 I had cause to write to the bank after discovering that confidential, personal information (eg; customer date of birth, eligibility for loans and other credit-scoring flags and even the balance of funds held in savings accounts) was presented in every web page delivered after a user logged in. The only technical protection against this information becoming public knowledge was the use of the “secure” HTTPS protocol to deliver the web pages. As described earlier, this is a bank that ignored widely published advice dating from 2010 to stop using SHA-1 based certificates to authenticate and secure web sites. It actually changed to an SHA-2 certificate only on 17 January 2017 – in relative terms, minutes before Internet browsers like Firefox and Chrome would have refused to display the bank’s site – and even then failed to change the certificates for subsidiary sites (eg; those used to host and deliver the bank’s advertisements and technical assets), causing browsers to issue warnings about site insecurity to every CA web-site visitor throughout 2017.

But regardless of the security of the particular flavour of HTTPS used by the bank, from a system design perspective three matters are relevant.

  • First, I can conceive of no possible reason for publishing such confidential information within a web page – the bank may want to know the kind of information disclosed so that its staff can try to advise (sell products to) its customers, but it surely has internal IT systems that display that kind of information to its staff.
  • Second, the reliance on a single security layer provides a single point of failure. When technology fails, it tends to fail catastrophically and rapidly.
    • An example is the recent discovery of flaws in the design of almost all CPUs (the ‘chips’ that provide our computer’s processing power) that allow sensitive information such as logins and passwords to be leaked – provoking a massive effort from chip manufacturers, operating system developers and others to mitigate the problems. These flaws have lain undiscovered (and therefore exploitable by who-knows-who) for decades.
    • In similar fashion, the underlying algorithms that secure the HTTPS protocol could fail at any moment. A simple fact. Should such a failure occur – coupled with this system’s appalling authentication methods – customer data becomes exposed unnecessarily.
  • Third – there is always a temptation to view IT systems and their operation in isolation, in a perfect world in which users only press and click what they are supposed to and computers and logins are never shared. In the messy real world all these factors and more come into play. Here, a number of assumptions have been made that, taken together, can be summarised as “HTTPS provides a secure channel between the bank and its customer”. It does not.
    • HTTPS (at best) provides a secure channel between a computer server delivering information (a web page) and a remote browser.
    • What the receiving browser and user does with that information once decrypted and displayed is for them to decide. This is not an excuse for systems designers to simply shrug their shoulders and say “Oh well, that’s out of our hands” – it is incumbent on designers to take the WHOLE system – which includes its users and the environment in which it operates – into account.
    • Let me take just two examples of how personal data might be disclosed. I should say at this point that all of the research I conducted on Credit Agricole was driven by personal interest (I was a customer) and none of it involved any hacking or attempt to infiltrate the company’s computer systems – all I did was look at the information and documents (web pages) sent to me by the bank. To explain first: on almost any modern browser, typing the key combination ‘ctrl+u’ will open a new view showing the “insides” of the web page you were looking at – revealing all the code, internal and external programs and much else that, taken together, “make the page work”. So, anyone can download a web page, hit ‘ctrl+u’, and see what’s going on. This is schoolchild-level knowledge – an IT professional (or a hacker) can be expected to have an entire tool-chest of forensic examination instruments available.
      • Example 1: Messy world. A customer sits in his/her office using their break-time to do some on-line banking. While a page (perhaps one trying to sell insurance policies – something the user thinks is innocuous) is open, a ‘crisis’ erupts and the customer is called away, leaving the page open. Someone else comes along, hits ‘ctrl+u’, and no amount of HTTPS stops them gaining access to the customer’s date of birth and savings and loan account balances.
      • Example 2: Technology is sometimes too useful for its own good. Internet browsers cache (a technical term meaning “temporarily store”) whole pages and even windows containing dozens of tabs, in order to speed display times by not having to download again data that is already available. So … here’s a neat trick. Open a web browser (Firefox, Chrome, Safari …) – open a second tab – within that tab, log in to a web-site (CA’s will do) – now (typical user behaviour) do not log out but simply close the tab. Find the particular browser’s “Undo closed tab” (or equivalent) command and re-open the tab you just closed – it doesn’t matter if you open and close five or six other tabs and sites between closing the secure page and re-opening it – just keep executing “Undo closed tab” until the one you want reappears. Bingo! The page you were just looking at (perhaps showing bank account balances) will reappear. Hit ‘ctrl+u’ to see what may be “hidden” behind the page. Imagine for one moment that the trickster here was a person of bad intent … see how the system fails?
    • There are extremely simple solutions to both examples I just gave (both are sketched in code after this list). Number 1 is that old “keep it simple” mantra – again in the form of “if you don’t give it, it can’t be lost”. The example problem is caused by CA inserting data entirely unnecessary to the working of the web page or site into the data sent. The solution is equally simple – send only the minimum data required for the web-site to work, and no more. Number 2 is a simple technical fix – it is possible to tell browsers which data may be cached and which data must NOT be cached. Use this simple technique and the trick I explained doesn’t work … unless you also forget to limit the lifetime of the cookie that represents the customer’s login session to the lifetime of the browser window – meaning that the cookie and associated login session are destroyed as soon as the user closes the tab or window. Forget this and, regardless of the caching rules set, the browser will simply request the data again from the (still live) session on the server and redisplay the page. In CA’s case they had set neither caching restrictions nor proper cookie lifetime restrictions, allowing the trick I described to be performed by anyone. Who would expect this of an upstanding bank?
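
Here is a minimal sketch of those two fixes, again assuming Flask purely for illustration: a Cache-Control header that forbids the browser from keeping any copy of a sensitive page, and a login cookie tied to the browser session:

```python
# Sketch: forbid caching of sensitive pages and tie the login cookie to the
# browser session. Flask assumed for illustration; credential checks omitted.
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder signing key

@app.after_request
def no_caching(response):
    # "no-store": the browser must not keep any copy of this page,
    # so "Undo closed tab" has nothing to restore from its cache.
    response.headers["Cache-Control"] = "no-store"
    return response

@app.route("/login", methods=["POST"])
def login():
    # ... credential checks would happen here ...
    session.permanent = False  # cookie (and session) dies with the browser
    session["user"] = "demo"
    return "logged in"
```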

I want to continue looking at Credit Agricole – not because I have any personal gripe (though I confess plenty of professional frustration) against the bank nor wish to cause them harm but because their IT systems are like the gift that keeps on giving – a book could be written using just their public-facing systems to illustrate how NOT to design, develop and operate Internet connected IT systems.

In November 2017 I again entered into correspondence with a main-board director of Credit Agricole, wishing on my part to make him aware of the serious flaws in the bank’s IT systems security and potential for unnecessary disclosure of confidential, personal data. That was my wish. You can guess my expectation. And … my disappointment when the letter I received was a boiler-plate text full of platitudes such as “Credit Agricole is constantly strengthening and auditing its information systems …”.

If the constant auditing claim was true then the only conclusion reachable is that either the auditors were incompetent – or their findings and advice had been ignored by the bank’s management for years.

The “constantly strengthening” claim was certainly untrue to my direct knowledge. Having first written to disclose serious system security flaws in 2015, I had seen the bank’s system security worsen over the intervening period. To give just one final (of many) examples of what was wrong, my reply contained this paragraph:

Each and every page of the Credit Agricole web site incorporates common program code libraries which are up to NINE years out of date and which contain (in the versions used and in the case of just two of the libraries included in the CA web site) over 9,700 published and documented security flaws. In case you do not understand the implication – when a security flaw is identified, it is first notified to the maintainer of the code library (as it is in this case) and time is allowed for the security flaw to be fixed. After the time has expired, FULL DETAILS OF THE SECURITY FLAW – INCLUDING SAMPLE PROGRAM CODE THAT CAN BE USED TO EXPLOIT THE FLAW – is published. In simple terms, a teenager in a bedroom can look on-line, find and download program code designed to break into Credit Agricole online systems. As of today, your website contains over 9,775 such published security flaws. Think of them as 9,775 open doors into your bank vault through which any criminal could walk and help themselves to whatever they wished.

That’s right – CA hadn’t bothered to update its underlying system software for at least nine years. During which time (all software contains bugs, remember) over 9,700 security flaws had been found, fixed, and published. But all 9,700+ were still open to the winds on the CA web-site.

How difficult is it to monitor such bugs and their impact? The first response is that, if the system is being kept up to date, such monitoring should only be necessary as a back-stop – a secondary check that nothing has been missed. Recall the mechanism: security researchers are constantly trying to find bugs in software exposed to the Internet – when a bug is found it is reported to the developer, who is allowed time to develop a fix and deploy (or at least offer) that fix to all affected users. Only then are details of the bug published so that other researchers can learn from it. This mechanism and the eventual reporting are a well-managed scheme. The bugs are called “CVEs” (“Common Vulnerabilities and Exposures”) and full, searchable databases are available on-line at https://cve.mitre.org/ and at https://www.cvedetails.com/ and at https://nvd.nist.gov/vuln/search and at https://oval.mitre.org/ and at …
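
As a taste of how that back-stop might be automated, the sketch below queries the NVD’s public CVE API for a component you ship. The endpoint and response fields follow NVD’s published “2.0” REST interface, but treat the exact field names as assumptions to check against the current documentation:

```python
# Sketch: ask the NVD CVE database which published flaws mention a component.
# Endpoint and field names per NVD's "2.0" REST API - verify before relying.
import json
import urllib.request

component = "jquery"  # hypothetical library to audit
url = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       f"?keywordSearch={component}&resultsPerPage=5")

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(f"{data['totalResults']} published CVEs mention '{component}'")
for item in data["vulnerabilities"]:
    print("  ", item["cve"]["id"])
```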

Enough bashing of Credit Agricole!

This article is not an excuse to bash or expose the failings of a single French bank. Sadly, from experience, I could have drawn examples of appalling practice from any number of organisations that I have encountered in just the last few years.

It just so happens that Credit Agricole provides a case study in how NOT to operate an Internet-facing IT system.

Perhaps the lesson to be taken here is that companies – and their directors and senior management – must become much more aware of the risks posed by poor Internet security, and of the relatively simple steps that can be taken to avoid the majority of security risks likely to expose personal or corporate data.

Wednesday, November 21, 2018

Internet Security – Part 4: Placing responsibility where it belongs

The original article was published at https://ift.tt/2Bo6fYe

Corporate and Board responsibility

Credit Agricole is far from unique in its implementation of on-line security measures that are inadequate by any reasonable assessment. While discussing why breaches occur I suggested that a crude, cold-hearted financial motive lies behind most of the data loss and fraud that occurs on the web.

Simply stated, it is cheaper for an organisation to ignore data security than to incur the costs associated with locking the doors and windows to keep it safe. As episode after episode has shown, even the most massive (and, on a personal level, catastrophically harmful) data breaches that have occurred (eg; Yahoo, Facebook) result in fines that are derisory in comparison to the scale of the companies’ profits. As for civil or criminal prosecution of the companies or their directors for their dereliction of duty – forget it. The legislation isn’t there to support such prosecutions.

OR … it wasn’t.

On 25th May 2018 a change occurred that has massive ramifications for companies that hold and process personal data. That was the date the EU’s GDPR (“General Data Protection Regulation”) came into force. Finally, legislation with real teeth exists. Companies that allow or enable data breaches similar to those I have described, or companies like Credit Agricole that employ inadequate security and thereby inevitably enable personal data loss, can now be fined the larger of 20 million euros or 4% of their worldwide turnover (not profit – their income before expenses).

Companies must now look hard at how they protect their customers from data breaches. The old risk-benefit ratios (which determined it was cheaper to let data be lost and pay for any clean up afterwards) are replaced by the potential of fines that can impose material damage to the bottom line of any organisation – perhaps even do existential damage to (ie; put out of business) the worst offenders.

It is early days with GDPR. The big tech companies, who gain most from harvesting and combining personal data, fought hard to stop the legislation coming into being – and failed. So far most have responded to the data privacy requirements of the legislation – in most cases by amending privacy and cookie policies and, in some cases, providing some description of and control over how personal data can be harvested and used by their sites. I see this as just a misguided attempt at compliance by companies whose business model relies on the abuse of customer data. It will be interesting to see what happens the first time a Facebook or a Google is confronted by a complaint of data breach – and faces fines on a scale never before seen. Watch this space with interest.

The situation for companies that have grown up in an environment that allows them to abuse the Internet and its citizens – in a fashion akin to the way outlaws in the old Wild West used to terrorise and abuse the citizens of remote towns and communities – is changing, as users and governments begin to realise the scale of privacy invasion and personal harm being perpetrated. Because worse (for those companies) is yet to come. GDPR is only one part of something called the European Data Privacy Framework and, while the details of this legislation have yet to be finally agreed, it is likely to come into force in late 2019. At that point the tables should really be turned, and I expect that both companies and the directors who control them will face criminal penalties of sufficient magnitude to make even the most adventurous and care-free among them think twice about their attitude to keeping the doors and windows locked shut.

Internet Security – Part 3: The impact on users

The original article was published at https://ift.tt/2KmFhmH

What does all this mean for users?

I started this article with an example of a fictional – though real-world – bank securing its premises with locks and keys. To round the circle, I have provided an example of appalling systems design and operation by a real bank operating in the virtual world – exposing all its customers to wholly unacceptable security risks and relying on security technology that was at least a decade behind the times.

To all appearances Credit Agricole is a long-standing, reputable bank that operates throughout France and has hundreds of thousands of customers – most of whom, I can only assume, carry on in blissful ignorance of the bank’s wilful disregard for the security with which it treats their personal data and their money. The simple fact is that it should not be trusted with either.

I can imagine a CA customer phoning the bank to report that all their accounts have been emptied or that their statement shows transactions that they have not performed. And I can imagine the bank’s assured reply that it takes its customers’ security very seriously and is entirely content with the security of its systems. The fault must, therefore, lie with the customer.

To any CA customers reading this who find themselves in a situation similar to the one I describe: do not take the bank’s word as worth a cent. Challenge them to prove that the transactions are due to your actions and not due to the appalling insecurity of the bank’s IT systems and the way it operates them – in short, the utterly disgraceful disregard it has for the customers that feed it.

I feel the need to repeat what I wrote at the beginning of this article. The reality is that nobody can be trusted.

Looking more broadly this lesson applies across the board.

No Internet-based service can reasonably be trusted to keep your personal data or possessions safe. Though organisations are starting to appear that place customer privacy and security above the grasp for naked profit (eg; the search engine DuckDuckGo or the Swiss email provider ProtonMail), sadly none of them yet offers banking or shopping services.

It follows that the prudent user proceeds through the Internet taking the greatest care over the personal data he or she leaves in their wake. Whether you are looking for a place to post your daily activities and innermost thoughts, update your calendar or contacts, deal with your banking or do the weekly shopping, work on the assumption that whatever data you provide – from your name and address to your credit card details, photos, confidential documents and list of friends and contacts – will at some point become “lost” to the villains who take advantage of the “profit first – customers last” culture that drives the design and operation of the computer systems with which you interact.

As a simple example, many on-line retailers ask your permission to retain your credit card details “to make checkout faster in the future” – or some variant. NEVER allow an online retailer to store your payment details – for the simple reason that if they don’t have your credit card details they can’t lose them. Then, when you read that retailers such as eBay (145 million customer records “lost” in 2014), Target (110 million), Sony (77 million), Home Depot (40 million) … [the list goes on and on] have lost their customers’ details, you will have less cause to worry – as long as you can trust the retailer NOT to have stored your card details under some other pretext.

Internet Security – Part 2: How and Why do data breaches occur?

The original article was published at https://ift.tt/2DzwoVG

How and Why do security breaches occur?

Why do breaches occur?

Before looking at how security breaches occur, it may be worth asking WHY they are allowed to occur.

If we return to our real-world example from earlier: securing a physical bank is well understood. You fit strong doors and vaults, complex locks and sophisticated alarms, and you have rigid procedures in place to control who has access to keys, with tiered authorities to protect increasingly valuable assets. Such measures are proven to be effective against the “attack surface” presented by a physical bank – essentially, bad people must gain physical access to the bank and its vaults before harm occurs. Some rigorous record-keeping and controls keep unauthorised staff away from customers’ personal storage vaults … etc. The security system works well.

Protections like these in the physical world have been in place and well understood for hundreds of years. The concept of keys and alarms and weighty doors is understood by everyone from the office cleaners up to the main board directors.

Turning to the virtual world, a couple of things above all become glaringly obvious.

  • First, the attack surface (analogy: the number of doors and windows that may be accidentally left open for the bad guys to squeeze through) is very much larger (a factor I’ll return to).
  • Second, the way the locks and alarms work are not understood at all by the majority of people who use or rely on them – all the way up to main board directors.

All this is another way of saying that, to most people, the subject of securing a public-facing IT system (bank application, shop, social media site …) on the Internet is frighteningly complex. So complex – they believe – that they simply wash their hands of the matter. Two responses can then occur:

  1. the organisation just ignores the situation, telling themselves “nobody will be interested in us” or,
  2. it hands the problem over to “IT security specialists”.

Response number (1) is nothing more than a dereliction of duty. The fact is that bad people are interested in even small targets (something as small as a home internet router or CCTV camera is of use to a hacker).

(The relatively tiny networks operated by Anadigi (parent organisation of biznik) – one public server cluster that provides our web sites, email, phone and other public-facing provision, and the “home office” network that has to deal with everything from email and phone communications through media streaming to CCTV security – fend off several thousand attack attempts each day. If we didn’t use mechanisms that shut out all access by external rogues after their first few attempts at forcing their way in, that number would be in the hundreds of thousands each day.)

Response number (2) is typical of medium to large organisations – the thinking being: hand the job to an external specialist, get a certificate saying everything is AOK, and our backs are covered – end of problem. Should anything nasty happen, we can blame the external specialists.

Both responses are wrong in easily understood ways. But they share a common factor – MONEY! The simple fact is that the job of properly securing a publicly facing IT system is time-consuming, expensive and requires both rigorous implementation and operation of procedures – that must be monitored constantly.

Whether expense arises from the need for network monitoring and protection equipment, time spent learning and configuring the new system – or the cost of hiring external “specialists” it needs MONEY. The spending of (relative to any organisation size) significant sums of MONEY.

So, we come to the answer to WHY system security breaches occur. Owners prioritise money generation over money spent on making sure all the doors and windows are securely locked and everything is bolted down tight.

How much easier it is to just look at an IT system as if it were a physical bank: when you build a new one, you put in the current “state-of-the-art” security and then set-and-forget while it goes about its work of generating cash for the business running it.

While the design and technology behind physical locks and alarms develops, it does so only slowly. A review every five years or so may be plenty good enough.

In IT circles, five years is the equivalent of several lifetimes. The list of technological threats that arise from software bugs, changes in security or other protocol standards is alone reason enough to require constant training, monitoring and updating of public-facing IT systems.

Companies and organisations pay scant attention to securing their customers’ personal data and possessions (or the personal data they have ‘acquired’ by tracking users in order to build valuable profiles of individuals) for one simple reason. Until very recently it has been far, far cheaper to pay for a public-relations whitewash than to spend the time and money ensuring that the design of the IT systems deployed, the way they are operated, and the staff who operate them are fit for purpose from a security viewpoint.

Stated simply, it’s cheaper to play security last than security first.

Hence, development staff are pressured to put new “features” in place as quickly as possible without anyone paying attention – let alone conducting oversight – on the security implications involved.

Hence we see Facebook saying “Oops – we didn’t mean that to happen” as time and time again some major breach of customer personal data occurs (eg; the way that Cambridge Analytica obtained personal data on 87 million Facebook users through use of a deceptive personality quiz – which took advantage of an interface Facebook had implemented to make it easier for external developers to create such games without thinking through the security implications).

How do breaches occur?

How many fingers and toes do you have? They’re not enough to count the ways that security breaches occur. I’m going to concentrate on the three main ways that data “gets lost” from an IT system. Consider the list in order of likelihood.

  1. Obtaining the keys. Either persuading a user to hand over their keys (eg; a “phishing” attack) or using some other method that gets the keys to the safe.
  2. Bugs or flaws in software or hardware. I am constantly amazed at how many companies and organisations fail to apply security patches and other security related updates to their systems – eg; updating to a newer, stronger encryption method long after the one in use has been compromised.
  3. Poor system design. Some systems are so badly designed they may as well hang a sign on the front page that reads “Welcome – Come on in and help yourself”.

Obtaining the keys

In the late 1980s I was at lunch with the Chairman and IT Director of a major UK bank, alongside a colleague who specialised in IT systems security. The bank Chairman brought up the subject of the new IT security measures his company had recently installed and, while his IT Director beamed smugly, proudly announced that his bank’s systems were now “absolutely impenetrable” to outside attack. He claimed it was now impossible for “anyone” to gain access to the bank’s systems. My colleague listened – then offered a wager (a good bottle of wine to the winner) claiming that he could access the bank’s systems within a week.

After much scoffing and guffawing the Chairman said he’d take that bet.

Three days later my colleague hand-delivered an envelope containing the Chairman’s previous six months payslips, the minutes of the Bank’s last three Board meetings, a copy of the list of phone calls made from the Chairman’s office in the past week … and a couple of statements showing the transfer of £10 from the current account of the IT Director to the current account of the Chairman – which my colleague had instigated. Just to prove he now had access to anything he desired within the Bank’s systems.

What high-tech skulduggery had been used to gain such deep inside access?

My colleague had copied some cute pictures of cats on to a couple of dozen USB keys … as well as some simple key-logging software. He then walked round the square outside the Bank’s headquarters one morning scattering the keys on the ground. Human curiosity then took over. Bank workers leaving the building for lunch found the USB keys, took them back to work … and inserted them into their PCs and spent time looking at the cute kitty-pics. Meanwhile, the key-logger software installed itself in the background and, over the next two days, sent enough login IDs and passwords to allow my colleague to access the bank’s systems without breaking sweat.

He enjoyed a very nice bottle of wine. The IT director was (a little unfairly) sacked.

OK – that trick doesn’t work so well in 2018. Current-generation PCs don’t automatically run software they find on a USB stick or CD, and most corporates are wise enough to bolt down any USB ports to prevent files going in or out of the building that way. Employees can no longer waste time looking at cute kittens – but nor is it possible to silently install malware or (as TV shows still pretend) simply plug in a USB key and hoover up all the files on a secure system!

But … the point to learn is that the simplest and easiest way to compromise any security system is to exploit some human weakness or persuade a human to voluntarily hand over their keys without complaint. The modern equivalent is some variant on a “phishing” campaign – an email is sent to the target user appearing to come from a valid organisation, complete with logos, signatures, links to help pages etc, telling the reader that their … “account has been compromised and their credentials must be reset” / “mailbox is full and they must login to clear it” …

The smart, trained user knows enough to check the actual sender address and the URL of the link they are asked to click. But send 1,000 emails and the hacker needs only 0.1% of the recipients to click the link and … SUCCESS! Access granted, keys obtained.
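
That check – reading a link’s real destination rather than its friendly text – is easy to demonstrate. The URL below is invented to show the classic trick of burying a trusted name inside a hostile domain:

```python
# Sketch: reveal the true destination of a link before anyone clicks it.
# The link below is invented; it buries "mybank.com" in a hostile domain.
from urllib.parse import urlparse

link = "https://mybank.com.account-verify.example.ru/login"
host = urlparse(link).hostname

print("This link actually goes to:", host)

# Only the registered domain at the END of the hostname matters - compare
# it exactly (a naive endswith("mybank.com") would pass "evilmybank.com").
trusted = host == "mybank.com" or host.endswith(".mybank.com")
if not trusted:
    print("WARNING: this is NOT the bank's domain - do not click.")
```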

Everyone will be well aware of the technique: emails arrive, supposedly from organisations with which you have no relationship, containing messages such as those above. Those emails are just sprayed out at random by the million … and they are successful. The response rate need only be tiny for the hacker to gain access to bank accounts or systems. Phishing emails can be very sophisticated – even IT professionals have been caught out.

Walk around any office and you’ll still see keyboards and cubicles plastered with post-it notes displaying login ID/password reminders that any visitor can easily note or snap with their phone.

There are many variants of human exploit. Phone calls from “the IT Support Team” asking for login details work more often than not – and if that fails, telling someone that new company policy requires them to install desktop sharing software to enable team meetings and remote support usually gets the job done.

Next time you read about a few tens of millions of user records going missing, understand that the most likely background story (which the embarrassed organisation will never admit) is that an employee GAVE the hacker access. Claims that sophisticated hacking software or a hitherto unknown vulnerability was used to obtain the lost data often hide the fact that some person simply screwed up.

Sometimes there are no keys! Unbelievable? A simple web search will reveal dozens of data losses caused by careless IT staff who (perhaps while maintaining a main system) simply opened another virtual machine instance on a cloud service, copied a backup to that machine … but forgot to password-protect the temporary machine.

It doesn’t matter how secure the main system is if the operating staff are idiots who can’t follow basic safety protocols.

And, yes, there are people constantly scanning the Internet … probing for just such openings and looking for whatever haul of treasure might be lying around just waiting to be picked up.

Easiest way to breach a computer system? Ask a human nicely … or just wait until they slip up.

Bugs or flaws in software or hardware

It’s true. All software contains bugs (flaws or mistakes) in the code. Even the most fundamental piece of software in any system (say, the network communications stack) is subject to flaws – some of which lie undiscovered for years.

Most of these flaws go unnoticed and unexploited as they are fixed before news of their existence comes to light. So … as long as all machines in a system are kept up to date, things are fine. Right?

Sadly no. Teams of researchers working worldwide constantly try to find bugs or new ways to exploit hardware and software. Where a workable exploit is found, the developer or manufacturer is informed and given time to correct the problem and distribute the resulting patch before the exploit details are published – a process known as “responsible disclosure”.

But sometimes, workable exploits are discovered first by equally skilled and diligent bad hackers – and turned into malware that is then spread as fast as possible so as to infect the maximum number of devices before the flaw can be patched. These so-called “zero-day” exploits can affect any IT system connected to the Internet and are the basis of many “state sponsored” hacking operations. Little can be done to defend against such flaws, apart from following general good operating practice that denies access to everything other than the parts of the system that must be exposed in order to deliver services. That said, I strongly suspect that “state actors” are blamed for huge (and potentially embarrassing and costly) data losses far more often than genuine zero-days are actually used – any state actor worth its salt will happily just take advantage of a human exploit, or of a system that hasn’t been kept up to date.

Poor system design

The final category I want to discuss is poor system design. As a system designer of many decades’ standing, I find this unforgivable.

I have already discussed the way that Facebook has allowed third parties almost unfettered access to its customers’ personal data – all without their knowledge or permission, to be used in ways that I suspect no customer would agree to. I could discuss Yahoo – or Google – or Ebay … or any of the hundreds of personal data holding companies whose lax security has led to the loss of personal “keys” and confidential information over the years.

Instead, I’d like to look at an example of how bad systems design, married to appalling operating practice and the wilful ignorance of main board directors charged with looking after trivial matters like customer security, pans out.

Let’s look at French bank Credit Agricole (“CA” @CreditAgricole). Users of its on-line banking service are asked to log in with

  • their full 11-digit main account number as ID
  • a self-chosen 6-digit password

So, what’s wrong? Compare this to a more typical on-line banking system where we might see:

  1. A customer ID (login ID) issued by the bank that bears no relationship to the customer’s account number or to any other information that might become publicly available. In other words, there is no way to derive the account number from the customer ID, or vice versa. The customer ID is known only to the customer.
  2. A password, usually comprising at least 8 characters and mixing at least digits and letters. Again, this information is known only to the customer.
  3. A “memorable word” of arbitrary length and complexity. Once more, this is known only to the customer.
  4. At login, the bank asks for the full customer ID followed by a random selection of characters from the password (say, the 3rd, 6th and 9th characters) followed by another random selection of characters from the memorable word (eg; the 4th, 8th and 12th characters).
  5. NOTE: None of the login information is public (eg; an account code), nor can it be derived or guessed from publicly available information. Further, only a small part of the password and memorable word is ever requested – NEVER the full code strings. This way, even if some bad actor manages to listen in to the (properly encrypted) data stream (or uses key-logging, session replay or some other malware to watch), they never see the full login details, because each subsequent login attempt asks for a different random combination of characters from two separate secrets. Even if the encrypted communication line were broken, a criminal would have to monitor dozens or hundreds of logins to harvest the full password and memorable word. (A rough sketch of this challenge mechanism follows this list.)
  6. The best systems use second-factor authentication (“2FA”) – a mechanism which requires a user to hold a second, usually physical, means of identification before gaining access to an account. A simple example is where a web-site sends a one-time code to your mobile phone which you must then enter into the login screen … correct entry of this code ‘proves’ you have access to the phone, so it acts as a secondary check that you are authorised to log in. Better still is a physical “key” (eg; a USB key pre-programmed with your secret, encrypted identity).
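
To make point 4 concrete, here is a minimal sketch of a partial-secret challenge (Python; the function names and the three-character challenge length are my own illustrative assumptions, not any bank’s actual code). Note the design trade-off this scheme implies: to check individual characters, the bank must store the secret reversibly encrypted rather than hashed.

import secrets

def make_challenge(secret_len, k=3):
    # Pick k distinct positions (1-based), freshly randomised at every login.
    return sorted(secrets.SystemRandom().sample(range(1, secret_len + 1), k))

def verify(secret, positions, supplied):
    # Compare the supplied characters against those positions of the stored secret.
    return len(supplied) == len(positions) and all(
        secret[p - 1] == c for p, c in zip(positions, supplied)
    )

password = "trustno1xyz"               # hypothetical stored secret
positions = make_challenge(len(password))
print(f"Please enter characters {positions} of your password")
# verify(password, positions, [...]) then checks the customer's answer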

Now look at CA. Everybody who receives a cheque or direct debit instruction or just an electronic payment from a CA customer knows the customer’s account number – therefore immediately knows their on-line login ID. That’s the first half of the problem out of the way then. We are already 50% of the way towards gaining full access to a CA customer’s bank accounts. And, all we’ve done is read something the target customer sent us!

Second, at each login, the full 6 digits of the password must be entered. That’s all 6 digits in the order they were chosen originally. So, if a user chooses “123456” as their secret password then they must enter “123456” every time they want to login.

The bank does not allow letters, punctuation or anything other than the digits 0-9 in the password.

But CA offers no 2FA and requires that all 6 digits of the password are entered on each and every login.

So, the only thing standing between a criminal and a CA customer’s bank accounts is a 6-digit code.

6 digits provide 1 million possible combinations, so somebody has obviously decided that correctly guessing the customer’s code is all but impossible.

Except that many customers will simply choose something easily memorable – their own or a close relative’s birthday (eg; “110263”) – a number easily obtained from public records (or any of hundreds of prior data breaches). Knowing the birthday brings the number of attempts needed to crack many accounts down from 1,000,000 to … just 2~3 (one guess per common date format)!

So, if I’m a hacker prepared to do a little research (eg; look up the target customer’s Facebook profile), I need only receive a cheque or payment from the customer to access their bank accounts within (say) 3 attempts. Of course, as a hacker, I probably already have the Equifax-sourced data dump (or one of the dozens of similar personal data sets “leaked” by companies worldwide), so I may not even have to visit Facebook to find my target’s birthday or family data – I just look them up in the data I already have.
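
The arithmetic is easy to demonstrate. A minimal sketch (Python; the date and the three formats are illustrative assumptions) of how a known date of birth collapses a 6-digit code search:

from datetime import date

dob = date(1963, 2, 11)   # hypothetical target birthday, eg; from a breached data set

# The common ways a customer might encode a birthday as 6 digits.
candidates = {
    dob.strftime("%d%m%y"),   # "110263" – day/month/year
    dob.strftime("%m%d%y"),   # "021163" – month/day/year
    dob.strftime("%y%m%d"),   # "630211" – year/month/day
}
print(candidates)  # 2~3 guesses instead of 1,000,000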

Being as polite as I can be, the CA on-line authentication system is not covered by my definition of adequate security.

What else could possibly go wrong?

Modern browser technology for one. Many modern browsers are configured ‘out-of-the-box’ to offer the convenience of remembering data entered into a web form. Firefox (possibly the most secure mainstream browser currently available) will happily remember and fill in the 11-digit account code field of CA’s login screen. Use a shared computer or login – as many people do at work or at home – and it’s not even necessary to receive a cheque or some other transaction to learn the target customer’s login ID – just let the browser fill in the field for you.

Then there is a technology currently much beloved of web marketing departments – so much so that it has even been seen deployed in the wild by banks – called “session replay”. This is delivered silently as an obfuscated script (ie; a piece of code scrambled so that no normal user could possibly guess what it is doing, and even professionals have a hard time unscrambling it) embedded in a web page. Working unseen in the background, a session replay script can capture every key-stroke, mouse movement and click – including a full ‘live’ screen image – and send it all back to a third party’s server. Anyone with access to that server (eg; a bank employee, an employee of the company providing the session replay script … or anybody else who can reach the server the data is transmitted to) can then replay the user’s login session – revealing the full 6-digit “secret” code that only the customer is supposed to know, never to be revealed even to bank staff.

It gets worse. A CA customer calling the bank’s telephone line is greeted with a recorded message … asking them to key in, using the phone’s keypad, the full 11-digit account number AND the full 6-digit “secret” access code!!! The SAME login credentials used to access the on-line banking system.

Modern phones are full of user conveniences. Such as “Last Number Redial”. So imagine somebody calling CA from an office phone during their break. They key in their full credentials and in return are greeted by name. What a wonderful service. They finish their banking and go back to work. Along comes someone else … who hits “Last Number Redial” and, on many office phones, sees the full CA telephone number followed by 17 digits of “secret” login credentials – the exact same key used to access the customer’s accounts on-line. Our criminal doesn’t even have to risk talking to a bank staff member (who might notice the different voice) – they simply move to a computer, load the CA website, hit login and enter the codes. Voila! One set of bank accounts hijacked.

Even a phone that doesn’t remember digits keyed in during a call (and so will not play them back later) usually still displays them as they are keyed. Busy workplace; customer keys in their “secret” code while somebody stands behind with a camera phone ready to snap or video the display. Again, bank accounts hijacked.

And let’s not forget that most company-operated telephone switches (what used to be called a “PBX” but is these days just another computer routing traffic between phones) record all key presses for audit purposes. So … there’s another potential pool of happy CA account breakers – anyone with access to the company phone server log files or call records.

Having raised this security issue in writing with directors of Credit Agricole over three years ago – including a full, keystroke-by-keystroke description of the many ways the bank’s on-line security could easily be breached (there are several others I could trivially devise) – I received a response that the bank takes customer security very seriously and is entirely happy with its anti-fraud measures.

My mind boggles.

Just to add to the fun, the bank relies on HTTPS (the supposedly secure and encrypted web page communication protocol) to ‘ensure that customer communications are secure’.

Problem. CA continued to use a web certificate (the mechanism that underpins the entire encrypted connection between user and bank) signed with the SHA-1 hash algorithm until 2017 – right up until all major web browsers began refusing to connect to websites still presenting SHA-1 certificates.

The reason the browser companies decided to refuse SHA-1 certificates:

  • It was first shown in 2005, as a proof-of-concept, that SHA-1 collisions could be found far faster than the algorithm’s design promised – ie; the standard was broken in principle.
  • US federal agencies were barred from relying on SHA-1 signatures from 2010, accompanied by a recommendation that any organisation still using SHA-1 should update its certificates immediately.
  • The issue or renewal of SHA-1 based certificates was banned outright in January 2016.
  • In February 2017, Google and CWI published a practical, working SHA-1 collision (“SHAttered”) – definitive proof that the algorithm was broken in practice.

So that’s a full 12 years during which CA exposed its customers to a known-weak standard, a mere 7 years after US federal agencies were ordered to abandon it, and over a full year after the issue of such certificates was outlawed on the web.

But the bank remained entirely content with the situation – until confronted with the reality that its web-site would effectively disappear from the web in January 2017, as no browser would talk to it – because, on a very fundamental technical level, the site could not be trusted.
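
Checking what a site actually presents is straightforward. A minimal sketch (Python, using the third-party “cryptography” package; the hostname is a placeholder) that fetches a server’s certificate and reports the hash algorithm used to sign it:

import ssl
from cryptography import x509

# Fetch the server's certificate over TLS (hostname is a placeholder).
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# 'sha1' here would mean the site still relies on the broken standard.
print(cert.signature_hash_algorithm.name)   # eg; 'sha256'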

Generic design flaws

In almost every case, the damage done by a data breach or personal data loss could easily have been mitigated.

To take a simple example – the system needs to retain credit card or other confidential payment details. This may be necessary either because payment will not be taken until the goods are shipped at some later point in time, or because legal and accounting regulations require payment details to be kept.

Fine – there’s little that can be argued against a legal requirement to retain even sensitive data.

However … the way in which that sensitive data is stored makes an enormous difference. A little secret – encrypting a piece of data (credit card details, social security ID – even your name, address and phone number) requires barely one line of program code. In pseudocode, the code changes from this:

PUT “credit card” in database

to

PUT encrypt(“credit card”) in database

where “encrypt” is a program function that performs the actual work of encrypting the data – in this example a credit card number. The “encrypt” function itself can use any of numerous modern, secure methods to scramble the data. The best use a secret key – known only to the organisation processing and storing the data (er, just like our real-world bank having a key that opens the door to the bank) – that is stored well away from anywhere Internet-accessible.
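
For the avoidance of doubt, here is a minimal real-world sketch (Python, using the widely deployed “cryptography” package’s Fernet construction; the card number and the key handling are illustrative assumptions, not any particular company’s scheme):

from cryptography.fernet import Fernet

# Generate the secret key once and store it well away from anything
# Internet-facing (eg; a hardware security module or a separate key service).
key = Fernet.generate_key()
cipher = Fernet(key)

card = b"4111 1111 1111 1111"        # a standard test card number
token = cipher.encrypt(card)         # this scrambled token is what goes in the database
print(cipher.decrypt(token))         # readable again only with the secret key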

So, a simple change can safeguard personal data – to the extent that even if some idiot leaves a backup of a couple of billion customer records on an unprotected server exposed to the Internet, crooks and spies are welcome to come and take a copy: unless they also obtain the secret key, they can spend the next few million years trying to unscramble just one customer record.

Given that the process of safeguarding personal data through encryption is so trivially easy, we need to ask why companies that expect us to trust them with our data don’t do it (click on a few of the blobs on the Information is Beautiful chart to see how many times personal data has been stored in clear, plaintext form – ie; not encrypted at all).

The answer once again comes back to MONEY. In simple terms, encryption uses processing resource. If a company encrypts large amounts of data, it must use – hence pay for – bigger or more computer hardware, and probably more storage as well. So it would cost – in relation to the turnover and profit of any arbitrary organisation processing personal data – a little bit more to properly safeguard that data.

That is how much regard and respect the companies revealed on the Information is Beautiful chart – and the thousands more that aren’t (yet!) on the chart – show to you and your most personal and confidential information.

Your custom and your data are worth billions of £/$/€ to them, but while the cost of putting a trifling few billion people at risk of fraud and worse remains less than the cost of saying “Oops – sorry – my bad!”, they will continue to run their IT systems with the minimum amount of hardware and pocket the savings that data encryption would cost.