Thursday, November 22, 2018

Internet Security – Part 5: Installing better locks

The original article was published at https://ift.tt/2PJjlZ0

What can be done to improve security?

So far this article has focused on the problems with the security of Internet facing IT systems. I hope it is obvious to all that much needs to be done to improve the entire landscape from the way that users authenticate their access to sensitive systems through to the responsibilities of companies and organisations to respect and safeguard personal data and possessions with which they are entrusted.

So, the truth is that much can be – and arguably should already have been – done to improve data security on the Internet.

When reading the outlines below, please bear in mind that the information is presented back-to-front. The reality is that no single software developer – or even a company’s IT director – can implement the “security first” approach to systems design and operation that is needed to deliver systems that can operate securely on the Internet.

The impetus and instruction for the changes necessary to improve the security landscape must come from the top. For smart companies that means boards and directors taking the time and trouble to learn this “difficult techie stuff” to at least the level necessary to know the questions to ask and how to evaluate the answers received. Personally, I believe there is commercial advantage to be had for companies that can promise true security to their customer base.

Ultimately, the final top-down effectors of change are governments, both national and supra-national. If companies and organisations fail to get their acts in order and prevent the current volume and scale of abuse of personal data seen on the Internet, then they must expect laws to be enacted that enforce the necessary changes. Complaining about such legislation has all the effect and moral rectitude of protesting, after a traffic accident, that of course you don’t hold a driving licence – after all, you are entirely competent to judge for yourself your competence to drive a car as other road users reasonably expect. In the same way, arguing that you should be left alone to safeguard – and do whatever you want with – your customers’ personal data carries no more weight.

In simple terms, change must be driven from the top. In an ideal world, company boards would become proficient in IT matters and enact the necessary changes. In the real, practical world we inhabit I suspect that companies will only change when external forces – market expectations or a legal framework carrying truly painful penalties – force change.

Technical changes

The design, construction, deployment and operation of commercial IT systems is such a diverse and complex area that it is impossible to do more than cover some general principles in an article such as this. There is, I believe, one principle that should be kept in mind by anyone responsible for the design or implementation of a complex IT system – that principle is the concept of control.

What do I mean by “control”? Simply the degree to which you can retain control over components of the system being deployed. There is a modern trend to treat complex IT systems as mere collections or assemblies of building bricks – the idea being that a complex system is built by combining the “best in class” component specialist sub-systems.

So, we see systems constructed out of some supplier’s accounting system, another’s inventory system, another’s customer support system, another’s marketing and customer relations system, an off-the-shelf customer engagement system based perhaps on a blogging platform “repurposed” for the need … and so on.

Systems that are cobbled together like this represent the worst examples of expediency over long-term usability and control. They are actually more expensive to put together (every component must have individual interfaces to every other component with which it must share data or processes). Data is spread around like confetti between different, competing databases – each of which must be updated and kept in step with the others in real time if anarchy is not to arise quickly. There are more purpose-written processes to develop, test and maintain. In production, such systems quickly become a nightmare to maintain as different components develop and require interface changes to schedules determined by their developers – the alternative being to continue using now outdated versions, with the security and reliability implications that involves.

It should be obvious that the costs and complexity of managing such systems destroys reliability while harming control enormously.

Put simply, organisations that use this approach to complex systems have no effective control – they are entirely reliant for the reliability and security of the overall system on the individual component suppliers – and their own ability to keep pace with changes required to the purpose-written interfaces between the components.

A simple piece of advice – understandable by the least technically proficient company director – complexity is the enemy of control. If you seek a system whose security and reliability you can control – keep it simple.

System design

The lesson here is simple – the design of systems must be driven by a “security first” culture. Every function, operation and change to the design of the system must be subject to consideration of its impact on the security of the system and the data it holds and processes.

It is important to understand that prioritising security need not negatively impact either the usability of the system or its flexibility and ability to adapt to an organisation’s developing needs.

All design / change decisions must pass a security review to assess their impact on the attack surface of the overall system and their potential to open exploits – whether directly, against the affected system parts, or indirectly, by granting access to other privileged data or functions. Only after this security assessment has been passed should the design go forward for development.

Such an assessment should never allow an authentication system like the one deployed by Credit Agricole to be developed, let alone deployed for live use. Equally, had Facebook conducted full and proper assessments of the developer interfaces it introduced – driven by the desire to gain more users by letting outside developers build widgets to drive user engagement on the site, which inevitably involved exposing at least some user data to the interface – the ability to abuse those interfaces to elevate permissions and gain access to personal data sets that were not part of the intended design should have been recognised, and the detailed design altered to prevent such abuse, before development began. A day of consideration and reflection saves weeks of a CEO having to testify before government committees and courts.

When considering the security implications of any design decision it is essential that the normal developer mind-set of “this will be great/exciting/fun” is put aside in favour of a mind-set that puts the assessor in the mind of someone trying their hardest to attack or abuse the system. While the role requires a level of technical proficiency at least as high as that of a good developer, it should go to someone versed in “white hat” hacking – ie; trying to penetrate a system in an ethical manner in order to identify and report exploitable flaws so that they can be addressed before the public is put at risk.

System change management

All systems must be capable of change and adaptation over time to continue to meet the evolving needs of the organisation they serve.

However it is vital to recognise that the area of system change presents perhaps the greatest risk to the reliability, safety and security of a running IT system.

I have seen organisations where the marketing department has been allowed to simply demand that the IT department (and its system developers) implement or install some new function or technology that the marketing department “needs” – usually right now! The result quickly becomes a web site full of third-party scripts whose workings nobody can explain and whose purpose, after just a few weeks, nobody can remember.

To say that this way of working presents an enormous security risk is an understatement of such proportions that I shouldn’t need to write it. Yet the practice is plain to anyone capable of calling up the code behind any web page and scrolling through, counting the third-party scripts and odd code included. In the worst cases I have seen organisations deploy session replay technology (the most invasive personal spying technology currently easily available to web site operators) – presumably because the marketing department “needed” to see how users were interacting with the website – quickly implemented without a single thought given to the harm caused to site visitors or the legal risk to which the organisation was being exposed.

No matter how great the “need” or how urgent the desire to support / release some new product or service might be, system change is system change. ALL system changes must be subject to the “security first” mantra and be put through the same assessment process as any other change proposed to the system.

System operation and management

As already described, having even a perfectly secure system design gains precisely nothing if the system is then operated in a slap-dash way.

Many people (board directors, time to perk up – I may have you in mind here) fail to realise the amount of work required to keep a system up and running on the Internet 24/365. The absolute need for company boards and directors to understand how these complex IT systems work is covered below. For now, just understand that even a simple website comprises an underlying operating system, web server software, database management software, and a panoply of web, server and client-side programming languages – oh, and let’s not forget the code that actually makes a web page display on a web-site visitor’s device.

All these layers of software and data are individually complex, and all must be regularly backed up, maintained and occasionally restored, both as individual components and as a whole.

When, say, the underlying operating system must be upgraded (see the section “System maintenance” that follows) the likelihood is that the server running the entire system must be rebooted – meaning that the website goes “off air” while the machine is restarting and, more importantly, any customers or visitors using the site will experience a service interruption. In practice, modern websites do not run on a single machine but on closely connected clusters of (usually virtual) machines that distribute the load placed on the service being provided, place sensitive services (such as the database) inside a network inaccessible from the Internet, and allow individual machines to be updated or otherwise serviced without interrupting continuity of service to visitors.

It is important to understand that the majority of Internet services (eg; web-sites as being discussed here) are actually run on “virtual machines”. A virtual machine (“VM”) is simply one instance of a “machine” that runs (usually) alongside other virtual machines on a single physical computer server. Several technologies (which are outside the scope of this article) can be used to deploy virtual machines, but VM technology brings many benefits, including:

  • Cost reduction – a single physical server can run a number of virtual machines and make better use of the processing power available
  • Backup – the “state” (ie, a snapshot of everything the VM is processing and all the data it possesses) can be taken in a very short time and without interrupting the machine’s operation
  • New VMs can be created or destroyed at whim – if more processing power is required for example or a replica of a machine is required to test an upgrade or other new software a new VM can be created or copied very quickly – then used and destroyed when it is no longer needed
  • The individual functions required to operate a website (eg; a web server and a database server) can be placed on separate VMs and segregated so that access to the web server is possible from the Internet while access to the database is only possible from the internal network – helping to prevent bulk data loss or breach
  • In case of physical hardware failure or upgrade need, the VMs running on that piece of hardware can quickly or even seamlessly be moved to another physical machine, allowing the first to be maintained

Like all technologies, VM technology can be a double-edged sword. The same ease that allows VMs to be replicated, backed up, created or destroyed at whim can, if not properly managed, lead to major problems.

Many cases of software update or change involve structural changes to configuration files or entire database structures. One of the (so far unsaid) benefits of VM technology is the ability to easily roll back a system to an earlier state – something that may become desirable in the event of data loss, system corruption or upgrade failure, for example. But sometimes this alone is not enough of a security blanket in case something goes wrong during or after an upgrade. Take the example of a software upgrade that requires structural change to a large database. Sensible practice would be to take a complete copy of that database before making the structural changes – that way, should the upgrade fail in some way, the original database can be restored and the state of the VMs that use it can be rolled back, restoring the use of the system while the problem of diagnosing and correcting what went wrong is dealt with.
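As a minimal sketch of that practice – assuming a PostgreSQL database and its standard pg_dump tool; the host, database name and backup path here are hypothetical – the pre-upgrade copy can be scripted so that the upgrade simply cannot proceed without it:

```python
import datetime
import subprocess
import sys

# Hypothetical connection details -- replace with your own.
DB_NAME = "customer_db"
DB_HOST = "db.internal.example"

def backup_before_upgrade() -> str:
    """Take a complete, timestamped dump of the database before any
    structural change, so the original can be restored if the upgrade fails."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dump_file = f"/backups/{DB_NAME}-pre-upgrade-{stamp}.dump"
    # pg_dump's custom format (-Fc) produces a compressed archive that
    # pg_restore can later rebuild, in whole or selectively.
    result = subprocess.run(
        ["pg_dump", "-h", DB_HOST, "-Fc", "-f", dump_file, DB_NAME],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Abort the whole upgrade if the safety copy could not be taken.
        sys.exit(f"Backup failed, upgrade aborted: {result.stderr}")
    return dump_file

if __name__ == "__main__":
    print(f"Backup written to {backup_before_upgrade()}")
```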

The next thing to understand is that many of the physical servers that run all these VMs are no longer present in data centres on a company’s own premises. With the rise in popularity of cloud computing, the likelihood is that these VMs exist “in the cloud”. In real-world terms this means that a VM may as easily exist in a datacentre in Phoenix as in Strasbourg, Dublin or London.

Data storage can also be virtualised in a similar way to a machine. So that database – which might occupy several terabytes of storage – is just another instance located somewhere in the cloud. And whereas in the days of real machines and proprietary company data centres a company would have to invest in some multiple of the actual storage capacity it needed to operate its IT services, in the age of the cloud, if a quick copy of a database is required during a system upgrade, the answer is to simply create (a better term is perhaps “rent”) another storage instance of the required size, duplicate the data to that, and destroy it as soon as it is no longer needed – much more cost-effective.

Here comes the cutting edge of the sword. If the operator who creates the new storage instance omits to secure it (by setting passwords and ensuring it is inaccessible from the Internet for example) … then the entire database contents become publicly available.

As organisations right up to the super-secret American NSA have discovered to their cost, this kind of simple human error makes all the work put into securing operational systems worthless, as the data is gratefully hoovered up by “the bad guys” in an instant. And a major data breach has just occurred.
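To make that concrete – a minimal sketch only, assuming the temporary storage instance is an Amazon S3 bucket managed with the boto3 library; the bucket name is hypothetical – the lock-down and its verification can be made an explicit, auditable step before any data is copied in:

```python
import boto3

BUCKET = "temporary-upgrade-copy"  # hypothetical bucket created for the upgrade

s3 = boto3.client("s3")

# Belt and braces: block every form of public access on the bucket...
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# ...then verify the setting actually took effect before copying data in.
config = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
assert all(config.values()), "Bucket is not fully locked down - do not proceed"
print(f"{BUCKET}: public access blocked, safe to copy data")
```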

System operation and management – the right way

Having looked at how modern web sites or services actually exist and are delivered in practice and having identified some of the horrific consequences of making simple human errors how does an organisation preserve its security and protect itself from such horrors?

The answer is to go back to lessons I learned at the dawn of commercial data processing when giant mainframes the size of power sub-stations (but with less processing power than the phone in your pocket) ruled the computing universe.

Answer: Procedure manuals and check-lists.

It really is that simple – backed up, of course, by management oversight that ensures those procedures and check-lists are followed and adequately recorded so that every process can be audited … and by a corporate policy, enshrined in employment contracts, that defines failure to follow a defined procedure and rigorously complete each check in an auditable manner as gross misconduct, possibly leading to demotion or instant dismissal.

If that sounds simplistic and drastically harsh, may I suggest you spend a day as a ‘fly on the wall’ inside a modern IT operations centre – and observe the young techies’ fingers fly over keyboards connected to multiple machines behind which lie dozens or hundreds of VMs, as they gleefully display their advanced skills by “spinning up” and destroying VMs at whim and moving major datasets around with no more effort than offering a toffee from a jar.

All unrecorded and all subject to a single finger-slip that either destroys the live dataset – or puts a copy out on the Internet with no protection at all.

The experience should suffice to illustrate the need for procedure manuals, check-lists and good old-fashioned accountable management.

At the same time, adequate emphasis must be given to staff training.

  • Training on the essential need to follow procedures
  • Training on understanding the relationship of trust between an organisation and its customers – and just how important and valuable that trust relationship is to both parties
  • Training on adopting a “security first” culture – towards both customer data and the organisation’s own data. No organisation should have a single member of staff who might fall for a human exploit – whether a phishing email, a phone call from fake “IT support”, or a click on a link on a dodgy (or, sadly, even reputable) web site.

System maintenance

While I understand and support a company’s decision to base its services on an enterprise-class operating system (such as Red Hat Enterprise Linux or its open source variant CentOS) rather than a leading-edge, frequently updated variant such as Fedora, I do not understand corporate IT policies that insist that every little operating system patch must be subjected to its own extensive testing before it is installed on live systems. Do they not understand that (especially in the open source world based around Linux) every patch and software update has been exhaustively tested thousands of times on thousands of different hardware configurations before it hits the live release channel?

In these times of increased security bugs and flaws – never mind the fact that updates also contain bug fixes and performance improvements – this behaviour makes absolutely no sense.

For over a quarter of a century I have run a mix of enterprise class and leading edge systems. All my machines are updated at least once every day as new software updates become available. In over 25 years I can recall two instances where a nightly update has caused a problem. I defy any corporate entity that operates a deferred patch deployment policy to tell me they have suffered as few outages over a similar time period.

Simple fact: by the time a security patch is released, knowledge of the underlying bug is already public and hackers will be busy trying to find machines that have yet to be patched. Remember those tens of thousands of attack attempts our little network sees off each day? They represent hackers probing our network and servers for known vulnerabilities – windows and doors that have been left unlocked.

On a balance of risk-benefit the benefit is clearly in favour of installing at least security related patches as soon as they become available. Not to do so invites systems to be compromised, corrupted or taken over by malicious actors.
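By way of illustration – a minimal sketch, assuming a Red Hat family system where dnf supports the --security flag; the log path is an arbitrary choice – the nightly application of security patches can be automated and logged:

```python
import logging
import subprocess

logging.basicConfig(
    filename="/var/log/security-patches.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def apply_security_patches() -> None:
    """Install pending security errata non-interactively.

    Intended to run nightly from cron or a systemd timer, so that
    security fixes are applied as soon as they reach the release channel."""
    result = subprocess.run(
        ["dnf", "upgrade", "--security", "-y"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        logging.info("Security updates applied:\n%s", result.stdout)
    else:
        # A failed update run needs human eyes, not silence.
        logging.error("Update run failed:\n%s", result.stderr)

if __name__ == "__main__":
    apply_security_patches()
```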

Keeping a system up-to-date is only part of the maintenance job. An equally important function is …

Keeping abreast of the technology

As the CA example shows, it’s just not good enough to install the current “best-in-class” security and forget about it.

The technology and IT security landscape is constantly changing.

As an analogy, developing and operating a secure web-site today is more like building a physical bank – then knocking it down to rebuild a more secure version every few months. That is assuming that steps have been taken to avert urgent threats as soon as they arise.

An essential requirement of maintaining a complex commercial web-site is to remain abreast of the technology and threat landscape to ensure that the web-site remains both relevant to its users and its owner and as safe as it can possibly be from external attack.

Any company director, banker or accountant should be familiar with the concept of depreciation – in short, the recognition that any asset has a finite lifetime by which it must be replaced if the job it has been doing is to continue. IT systems are not immune from this concept. Yet companies insist on treating their IT systems – in particular their Internet facing systems – as if they were old fashioned bank buildings. A construct that is built and then expected to last 50 or 100 years with only a new lick of paint every 5 years, a change of locks every 10 and a new vault every 20.

Modern computer systems depreciate (in their fitness for purpose) far more rapidly. Companies need to learn that their Internet facing IT systems should be completely rebuilt at intervals of 3 to 4 years, within which period they must be constantly updated to remain safe and secure.

The past ten years (a period which, as far as I am aware, has seen no major improvement in physical door-lock technology) have seen:

  • the fundamental way that Internet users view and interact with web-sites shift from desktop (or large screen) devices to new form factors – to the point that today most web-site visitors view those web-sites through tiny mobile phone screens – in portrait format rather than the landscape format of old. We are now seeing the rise of voice technology. Soon, users will expect to have no screen or display at all – they will simply ask a device in the corner of the room (or their car, their watch or their house …) to tell them their current account balance or to transfer money to pay a bill. The technology to do this is available now – the challenge for systems designers is to deliver the desired functionality while maintaining watertight security and proof of identity.
  • FAR, FAR more important changes happen on the technology plane.
  • Advances in computing power have meant that security algorithms and mechanisms that were considered safe a decade ago can now be cracked in a few seconds. Still, the second most commonly used password is simply “password” (the most common in 2017 was – if you can believe it – “123456”!!).
    • If encrypted using the still commonly used MD5 algorithm (supposedly one-way – ie; once encrypted with MD5 it should be impossible to recover the original text) “password” is cracked in less than 3 seconds (most of which is the time it takes to communicate back-and-forth with the website over my satellite based Internet connection) using the website https://crackstation.net/ – if you’d like to try yourself, the MD5 ‘hash’ (the encrypted form of “password”) is 5f4dcc3b5aa765d61d8327deb882cf99 (the first sketch after this list shows how to reproduce it). Even a highly uncommon password in which certain letters had been replaced with numbers was cracked in no more time. In passing, this is because so many websites have leaked so many passwords alongside their encrypted hash values – so saying that this website is ‘cracking’ the password is not strictly true: it is simply looking up the hash value in a database of previously obtained hash values and their original texts. But even where no database entry is available, current computer hardware can crack an MD5 encrypted password in times ranging from a fifth of a second to around 20 minutes for a complex, 12 character password.
    • Information on the current best-in-class encryption algorithms – and even instructions on how to implement them in most common web programming languages – can be found at https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet – so there really is NO excuse for not implementing or updating better security (the second sketch after this list illustrates the approach). Most encryption methods used to store login credentials are of the one-way kind – ie; the original password or other encrypted text cannot be recovered from the encrypted value as there is no key. Instead, at login, the credentials entered are encrypted anew and the encrypted values are compared to the stored values. The “penalty” when a one-way encryption algorithm is changed is that, as there is no way to recover a user’s original plain-text password, the user must be asked to choose a new password. This is good practice in any circumstances and allows a requirement for complex passwords to be imposed at the same time. For sensitive sites (such as on-line banking) this is far from an unreasonable step to take – far more reasonable than allowing the site to remain vulnerable where user credentials have been exposed by data breach (even of another site – users, despite all the warnings, will insist on using the same password to login to multiple sites) or where the algorithm currently in use is likely to become unfit for purpose within a year or two.
  • The language in which web pages are written (HTML) is now at version HTML5 and operates in a totally different way from the version in use a decade ago.
  • The threat landscape (the number of “doors” through which an IT system might be attacked) has grown exponentially. Coupled with the equally exponential number of mechanisms (“lock-picks”) that might be used, the entire structure of a commercial web-site and the way its components are opened to or restricted from the web has to be rethought and systems redeveloped to take these factors into account.
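The MD5 claim above is easy to verify with nothing more than a standard library, since the MD5 digest of “password” is exactly the hash quoted earlier:

```python
import hashlib

# The MD5 digest of "password" matches the hash quoted in the list above.
print(hashlib.md5(b"password").hexdigest())
# 5f4dcc3b5aa765d61d8327deb882cf99
```

And as a minimal sketch of the better practice the OWASP cheat sheet points towards – a salted, deliberately slow, one-way derivation, verified at login by re-deriving and comparing; the iteration count here is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a random per-user salt and the derived key to store
    in place of the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

# Because the derivation is one-way, changing algorithm means asking the
# user to choose a new password -- exactly the "penalty" described above.
```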
Simple steps to maintain system security

In summary there are just a few simple steps needed to maintain the security of an Internet facing IT system:

  1. Apply software patches and upgrades as soon as they become available.
  2. Keep up-to-date with changes in at least the key technologies used to defend the system and its users from attack – at minimum these changes include encryption algorithms, programming languages, database engines and other key components.
  3. Regularly check all third-party scripts that are embedded in delivered web pages (a sketch of such a check follows this list). Simple rule – the more third-party scripts you allow into your system, the less control you have over its security and the safety of your customers, as you never know when a third party changes its script to do something undesirable or is itself hacked – with the effect that you now have a criminal exploit running inside your system.
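As a minimal sketch of step 3 – using only the Python standard library; the page URL is hypothetical – a simple audit script can list every third-party script a page pulls in:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptAuditor(HTMLParser):
    """Collect the src attribute of every <script> tag in a page."""
    def __init__(self):
        super().__init__()
        self.sources: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

page_url = "https://www.example.com/"  # hypothetical page to audit
html = urlopen(page_url).read().decode("utf-8", errors="replace")

auditor = ScriptAuditor()
auditor.feed(html)

our_host = urlparse(page_url).hostname
for src in auditor.sources:
    host = urlparse(src).hostname
    # Scripts served from any other domain are the ones to account for.
    if host and host != our_host:
        print(f"Third-party script: {src}")
```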

Vulnerability assessment and Penetration testing

Good systems design, excellent operation and regular maintenance are not the end of the task list for operating a secure system. I have omitted testing as (I assume) everybody knows that software and configurations must be rigorously tested before being put into live production use.

All sensitive systems should be subject to constant vulnerability assessment and penetration testing by teams properly qualified to perform the roles. The aim should be to discover “open doors and windows”, bad design, inappropriate or improper use of external scripts or third-party supplied program code, etc. Brief descriptions of the aims and purposes of such testing can be found at https://www.thesecurityblogger.com/defining-the-difference-between-a-penetration-test-vulnerability-assessment-and-security-audit/ and at https://en.wikipedia.org/wiki/Penetration_test.

To understand why, I will return again to Credit Agricole. In November 2017 I had cause to write to the bank after discovering that confidential, personal information (eg; customer date of birth, eligibility for loans and other credit-scoring flags, and even the balance of funds held in savings accounts) was present in every web page delivered after a user logged in. The only technical protection against this information becoming public knowledge was the use of the “secure” HTTPS protocol to deliver the web pages. As described earlier, this is a bank that ignored widely published advice, dating from 2010, to stop using SHA-1 based certificates to authenticate and secure web sites. It actually changed to an SHA-2 certificate only on 17 January 2017 – in relative terms, minutes before Internet browsers like Firefox and Chrome would have refused to display the bank’s site – and even then failed to change the certificates for subsidiary sites (eg; those used to host and deliver the bank’s advertisements and technical assets), causing browsers to issue warnings about site insecurity to every CA web-site visitor throughout 2017.
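Checking what certificate a site actually presents requires no special access. A minimal sketch – using the standard ssl module plus the third-party cryptography package; the hostname is hypothetical – reads the signature algorithm straight off the certificate:

```python
import ssl
from cryptography import x509  # third-party "cryptography" package

HOST = "www.example-bank.com"  # hypothetical hostname to check

# Fetch the certificate the server presents during a normal TLS handshake.
pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

algo = cert.signature_hash_algorithm
name = algo.name if algo else "unknown"
print(f"{HOST}: certificate signed using {name}")
if name == "sha1":
    print("WARNING: SHA-1 signatures have been publicly deprecated since 2010")
```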

But regardless of the security of the particular flavour of HTTPS used by the bank, from a system design perspective three matters are relevant.

  • First, I can conceive of no possible reason for publishing such confidential information within a web page – the bank may want to know the kind of information disclosed so that its staff can try to advise (sell products to) its customers, but it surely has internal IT systems that display that kind of information to its staff.
  • Second, the reliance on a single security layer provides a single point of failure. When technology fails, it tends to fail catastrophically and rapidly.
    • An example is the recent discovery of flaws in the design of almost all CPUs (the ‘chips’ that provide our computer’s processing power) that allow sensitive information such as logins and passwords to be leaked – provoking a massive effort from chip manufacturers, operating system developers and others to mitigate the problems. These flaws have lain undiscovered (and therefore exploitable by who-knows-who) for decades.
    • In similar fashion, the underlying algorithms that secure the HTTPS protocol could fail at any moment. A simple fact. Should such a failure occur – coupled with this system’s appalling authentication methods – customer data becomes exposed unnecessarily.
  • Third – though there is always a temptation to view IT systems and their operation in isolation … in a perfect world users only press and click what they are supposed to and computers and logins are never shared – in the messy real world all these factors and more come into play. Here, a number of assumptions have been made that, taken together, can be summarised as “HTTPS provides a secure channel between the bank and its customer”. It does not.
    • HTTPS (at best) provides a secure channel between a computer server delivering information (a web page) to a remote browser.
    • What the receiving browser and user does with that information once decrypted and displayed is for them to decide. This is not an excuse for systems designers to simply shrug their shoulders and say “Oh well, that’s out of our hands” – it is incumbent on designers to take the WHOLE system – which includes its users and the environment in which it operates – into account.
    • To take just two examples of how personal data might be disclosed – I should say at this point that all of the research I conducted on Credit Agricole was driven by personal interest (I was a customer) and none of it involved any hacking or attempt to infiltrate the company’s computer systems; all I did was look at the information and documents (web pages) sent to me by the bank. First, by way of explanation: on almost any modern browser, typing the key combination ‘ctrl+u’ will open a new view showing the “insides” of the web page you were looking at – revealing all the code, internal and external programs and much else that, taken together, “make the page work”. So, anyone can download a web page, hit ‘ctrl+u’, and see what’s going on. This is schoolchild-level knowledge – an IT professional (or a hacker) can be expected to have an entire tool-chest of forensic examination instruments available.
      • Example 1: Messy world. A customer sits in his/her office using their break-time to do some on-line banking. While a page (perhaps one trying to sell insurance policies – something the user thinks is innocuous) is open, a ‘crisis’ erupts and the customer is called away, leaving the page open. Someone else comes along, hits ‘ctrl+u’, and no amount of HTTPS stops them gaining access to the customer’s date of birth and savings and loan account balances.
      • Example 2: Technology is sometimes too useful for its own good: Internet browsers cache (technical term meaning “temporarily store”) whole pages and even windows containing dozens of tabs in order to speed display times by not having to download again data that is already available. So … here’s a neat trick. Open a web browser (Firefox, Chrome, Safari …) – open a second tab – within that tab, login to a web-site (CA’s will do) – now (typical user behaviour) do not log out but simply close the tab. Find the particular browser’s “Undo closed tab” (or equivalent) command and re-open the tab you just closed – it doesn’t matter if you open and close 5~6 other tabs and sites between closing the secure page and re-opening them – just keep executing “Undo closed tab” until the one you want reappears. Bingo! The page you were just looking at (perhaps showing bank account balances) will reappear. Hit ‘ctrl+u’ to see what may be “hidden” behind the page. Imagine for one moment that the trickster here was a person of bad intent … see how the system fails?
    • There are extremely simple solutions to both examples I just gave. Number 1 is that old “keep it simple” mantra – again in the form of “if you don’t send it, it can’t be lost”. The example problem is caused by CA inserting data entirely unnecessary to the working of the web page or site into the data sent. The solution is equally simple – send only the minimum data required for the web-site to work, and no more. Number 2 is a simple technical fix – it is possible to tell browsers which data may be cached and which data must NOT be cached. Use this simple technique and the trick I explained doesn’t work … unless you also forget to limit the lifetime of the cookie that represents the customer’s login session to the lifetime of the window – meaning that the cookie and its associated login session are destroyed as soon as the user closes the tab or window. Forget to do this and, regardless of the caching rules set, the browser will simply request the data again from the (still live) session on the server and redisplay the page. In CA’s case they had set neither caching restrictions nor proper cookie lifetime restrictions, allowing the trick I described to be performed by anyone. Who would expect this of an upstanding bank? (A short sketch after this list shows both fixes.)
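A minimal sketch of both fixes – using the Flask framework purely for illustration; the route, page content and secret key are hypothetical – shows how little code the protection actually requires:

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-random-secret"  # hypothetical placeholder

@app.after_request
def forbid_caching(response):
    """Mark every response as uncacheable, so the 'Undo closed tab'
    trick described above redisplays nothing."""
    response.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    response.headers["Pragma"] = "no-cache"  # for older HTTP/1.0 caches
    return response

@app.route("/accounts")
def accounts():
    # Flask's session cookie carries no Expires/Max-Age unless
    # session.permanent is set, so the login dies with the browser window.
    session["logged_in"] = True  # placeholder for a real login check
    return "Account balances would be rendered here."
```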

I want to continue looking at Credit Agricole – not because I have any personal gripe (though I confess plenty of professional frustration) against the bank nor wish to cause them harm but because their IT systems are like the gift that keeps on giving – a book could be written using just their public-facing systems to illustrate how NOT to design, develop and operate Internet connected IT systems.

In November 2017 I again entered into correspondence with a main-board director of Credit Agricole, wishing on my part to make him aware of the serious flaws in the bank’s IT systems security and the potential for unnecessary disclosure of confidential, personal data. That was my wish. You can guess my expectation. And … my disappointment when the reply I received was boiler-plate text full of platitudes such as “Credit Agricole is constantly strengthening and auditing its information systems …”.

If the constant auditing claim was true then the only conclusion reachable is that either the auditors were incompetent – or their findings and advice had been ignored by the bank’s management for years.

The “constantly strengthening” claim was certainly untrue to my direct knowledge. Having first written to disclose serious system security flaws in 2015, I had seen the bank’s system security worsen over the intervening period. To give just one final (of many) examples of what was wrong, my reply contained this paragraph:

Each and every page of the Credit Agricole web site incorporates common program code libraries which are up to NINE years out of date and which contain (in the versions used and in the case of just two of the libraries included in the CA web site) over 9,700 published and documented security flaws. In case you do not understand the implication – when a security flaw is identified, it is first notified to the maintainer of the code library (as it is in this case) and time is allowed for the security flaw to be fixed. After the time has expired, FULL DETAILS OF THE SECURITY FLAW – INCLUDING SAMPLE PROGRAM CODE THAT CAN BE USED TO EXPLOIT THE FLAW – is published. In simple terms, a teenager in a bedroom can look on-line, find and download program code designed to break into Credit Agricole online systems. As of today, your website contains over 9,775 such published security flaws. Think of them as 9,775 open doors into your bank vault through which any criminal could walk and help themselves to whatever they wished.

That’s right – CA hadn’t bothered to update its underlying system software for at least nine years. During which time (all software contains bugs, remember) over 9,700 security flaws had been found, fixed, and published. But all 9,700+ were still open to the winds on the CA web-site.

How difficult is it to monitor such bugs and their impact? The first response is that, if the system is being kept up to date, such monitoring should only be necessary as a back-stop – a secondary check that nothing has been missed. Recall the mechanism: security researchers are constantly trying to find bugs in software exposed to the Internet – when a bug is found it is reported to the developer, who is allowed time to develop a fix and deploy (at least, offer) that fix to all affected users. Only then are details of the bug published so that other researchers can learn from it. This mechanism and the eventual reporting form a well-managed scheme. The bugs are called “CVEs” (“Common Vulnerabilities and Exposures”) and full, searchable databases are available on-line at https://cve.mitre.org/, at https://www.cvedetails.com/, at https://nvd.nist.gov/vuln/search and at https://oval.mitre.org/, among others.
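As a minimal sketch of such a back-stop check – assuming the NVD’s public JSON API (version 2.0) and its documented response layout; the search keyword is purely illustrative – a few lines suffice to query the database programmatically:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# NVD's public CVE API (v2.0); the keyword here is purely illustrative.
BASE = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = urlencode({"keywordSearch": "jquery", "resultsPerPage": 5})

with urlopen(f"{BASE}?{params}") as response:
    data = json.load(response)

print(f"Total matching CVEs: {data.get('totalResults')}")
for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    # Print the CVE identifier and the start of its description.
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```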

Enough bashing of Credit Agricole!

This article is not an excuse to bash or expose the failings of a single French bank. Sadly, from experience, I could have drawn examples of appalling practice from a number of organisations that I have encountered in just the last few years.

It just so happens that Credit Agricole provides a case study in how NOT to operate an Internet-facing IT system.

Perhaps the lesson to be taken here is that companies – and their directors and senior management – must become much more aware of the risks posed by poor Internet security and of the relatively simple steps that can be taken to avoid the majority of security risks likely to expose personal or corporate data.
