Supply Chain Data Management and Control
Document type: Micro-report
Author: Simon Pain BA(Hons) ACA MCIPS, Calum Upchurch
Publication Date: 27/09/2016
Abstract
Effective and accurate monitoring of the risk profile of a huge and diverse supply chain on a complex project requires the ability to collect data and store it in a safe and reliable place. For many people an Excel spreadsheet is the first, and only, choice. The Crossrail supply chain team question whether that really is the best option and suggest a better solution for future projects. This micro-report will be of interest to supply chain managers on projects of all scales.
Introduction
The Programme Supply Chain Team (PSCT) within Crossrail was tasked with overseeing supply chain activity on the programme. This required monitoring companies bidding for work at subcontractor level (i.e. working for main contractors) and identifying any apparent capacity constraints and financial risks. The task required the management of large amounts of data.
The size of the problem
For every major direct (Tier 1) Crossrail contract, the main contractor needed to award numerous sub-contracts in order to deliver the work. On main station works contracts, for example, there were dozens of sub-contract (Tier 2) packages, covering requirements as diverse as piling, architectural metalwork and cladding. Because of the sheer number of individual sub-contracts, PSCT only had the resources to monitor those agreed with each main contractor as being ‘critical’, judged primarily on value, lead-time, availability and position on the critical path. Applying the Pareto technique, critical packages numbered around 20% of the total packages required but comprised approximately 80% of the total value. Even so, the team monitored almost 2,000 packages and 6,000 tenderers over the course of the project.
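To illustrate the selection mechanics, the sketch below flags the highest-value packages that together account for roughly 80% of total sub-contract value. The package data and the 80% cut-off are invented for illustration; the real assessment also weighed lead-time, availability and the critical path.

    # Pareto-style cut: flag the highest-value packages that together
    # account for ~80% of total sub-contract value.
    # All figures below are invented for illustration.
    packages = [
        {"id": "T2-017", "name": "Piling", "value": 4_200_000},
        {"id": "T2-031", "name": "Cladding", "value": 2_900_000},
        {"id": "T2-044", "name": "Architectural metalwork", "value": 650_000},
        {"id": "T2-052", "name": "Signage", "value": 120_000},
    ]

    VALUE_SHARE = 0.80  # cumulative share of value treated as 'critical'

    total = sum(p["value"] for p in packages)
    cumulative = 0
    for p in sorted(packages, key=lambda p: p["value"], reverse=True):
        p["critical"] = cumulative < VALUE_SHARE * total
        cumulative += p["value"]

    for p in packages:
        print(p["id"], p["name"], "critical" if p["critical"] else "non-critical")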
PSCT gained information on the progress of Tier 2 contracts from several sources, the most common being Main Contractors’ procurement schedules. These were documents produced by each main contractor containing every anticipated sub-contract package, the key programme dates for each, budgeted/awarded values, and the names of the tenderers and successful sub-contractors with company registration numbers. As the procurement process for each package moved forward, the document was updated to reflect the latest tender lists and successful contractors. Additional updates were also gained at meetings with Tier 1 procurement managers and by regular telephone/email contact.
All information gained from contractors was placed into a central “database”. In fact, this was a substantial Excel spreadsheet that recorded every instance of every company appearing on a tender list for any critical package across the programme; if a company was tendering for ten critical packages, a complete set of data for that company was entered ten times. Details of the individual package, such as the package ID, value and working dates, were recorded, plus company-specific information including turnover, number of full-time equivalent staff, address and a financial failure score from the Dun & Bradstreet data tool. This mix of information enabled the tracking of financial risk and the monitoring of cross-programme capacity utilisation, by comparing committed package values to current annual turnover. PSCT was therefore able to assess the potential usage of a company across the programme, taking all sites into account.
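The capacity check itself is a simple calculation: total the value of all packages a company is engaged on across the programme, then compare that to the company’s annual turnover. A minimal sketch follows, with invented figures and a purely illustrative 25% review threshold:

    from collections import defaultdict

    # Cross-programme capacity utilisation: committed package value as a
    # share of annual turnover. Figures and the 25% threshold are invented.
    tenders = [
        # (company registration number, package id, package value in GBP)
        ("01234567", "T2-017", 4_200_000),
        ("01234567", "T2-008", 1_500_000),
        ("07654321", "T2-031", 2_900_000),
    ]
    turnover = {"01234567": 18_000_000, "07654321": 40_000_000}

    committed = defaultdict(int)
    for reg_no, _package, value in tenders:
        committed[reg_no] += value

    for reg_no, total in committed.items():
        utilisation = total / turnover[reg_no]
        flag = "REVIEW" if utilisation > 0.25 else "ok"
        print(f"{reg_no}: {utilisation:.0%} of turnover committed [{flag}]")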
The impact
There were several limitations with this method, which became clear over time, starting with obtaining information from the Tier 1 contractors. The procurement schedules were not dedicated supply chain management documents; they grew into substantial Excel spreadsheets with a large number of columns to capture the volume of data requested. The flexibility and ease of use of spreadsheets meant that each contractor produced a different style of document, with variations in both layout and content. This lack of commonality made any form of automation impractical. In some cases, for example, several lines of data were placed in a single cell, making it difficult to separate out the different names on a tender list; in others, cells were merged.
The “database” itself was limited in both capability and ease of use. Whilst a spreadsheet is the default choice for many people because of their familiarity with Excel, it is not necessarily the most suitable tool for permanently managing large amounts of data. With nearly 4,000 rows and over 70 columns, the spreadsheet was certainly not agile. Each individual tender had to be added as a new row, with each cell then manually populated: ten tenderers meant ten new rows, each completed across nearly all of the columns, so a single large tender could involve around 1,000 individual data items. This was a hugely time-consuming exercise, even using the various copying facilities available in Excel.
In hindsight, specialist database software, such as Access, may have been better suited to storing and manipulating large amounts of data. However, even Access would still have required repetitive data input. Copying information from procurement schedules into a separate database, and then maintaining it, also introduces scope for human error.
A solution
The best solution would comprise a single storage location for the whole project, which each main contractor would use to create its own procurement schedule. All changes to the schedules would be made in this location, backed up by a genuine database, which would be used by the supply chain team for monitoring. Not only would this eliminate the need to copy data to a new location and format, it would also mean the project could have true visibility of every Tier 2 package and tendering company on the project. A simple “Critical Package” flag could be used to select those to be subject to more detailed monitoring. Any changes or updates made by main contractors to procurement schedules would be immediately visible to PSCT. PSCT would be using the same data, rather than merely a copy of it.
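One way to realise such a store is a normalised relational design in which each company exists exactly once and each tender is simply a link between a company and a package. The sketch below uses SQLite via Python purely for illustration; the table and column names are assumptions, not a Crossrail design:

    import sqlite3

    # Illustrative normalised schema for a single shared procurement store.
    # A company is recorded once; a tender links a company to a package, so
    # ten tenders by one company need no repeated company data.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE company (
        reg_no   TEXT PRIMARY KEY,  -- Companies House registration number
        name     TEXT NOT NULL,
        turnover INTEGER            -- enriched from external sources
    );
    CREATE TABLE package (
        package_id  TEXT PRIMARY KEY,
        name        TEXT NOT NULL,
        value       INTEGER,
        itt_date    TEXT,
        start_date  TEXT,
        end_date    TEXT,
        is_critical INTEGER DEFAULT 0  -- the 'Critical Package' flag
    );
    CREATE TABLE tender (
        package_id TEXT REFERENCES package(package_id),
        reg_no     TEXT REFERENCES company(reg_no),
        successful INTEGER DEFAULT 0,
        PRIMARY KEY (package_id, reg_no)
    );
    """)

    # The monitoring view: every tenderer on every critical package, read
    # directly from the shared data rather than from a copy of it.
    rows = conn.execute("""
        SELECT p.package_id, p.name, c.name
        FROM tender t
        JOIN package p ON p.package_id = t.package_id
        JOIN company c ON c.reg_no = t.reg_no
        WHERE p.is_critical = 1
    """).fetchall()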
The essential fields required for each package were: package name, package reference number, tenderer names with company registration numbers, ITT date, Start on Site date, Completion date and package value. As long as each main contractor provided this information, all other data, such as address, voting constituency and turnover, could be drawn from Dun & Bradstreet, Companies House and other sources, keyed on the registration number. This could be automated fairly easily and would ensure consistency of data input.
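The Companies House lookup, for instance, can be driven entirely by the registration number. The sketch below uses the public Companies House REST API (a free API key is required); the endpoint and field names follow the public documentation at the time of writing and should be verified before use. Dun & Bradstreet enrichment would follow the same pattern through its own commercial API.

    import requests

    # Enrich a tender record from the registration number alone, using the
    # public Companies House API. API_KEY is a placeholder; the key is sent
    # as the basic-auth username with an empty password.
    API_KEY = "your-companies-house-api-key"

    def company_profile(reg_no: str) -> dict:
        url = f"https://api.company-information.service.gov.uk/company/{reg_no}"
        resp = requests.get(url, auth=(API_KEY, ""))
        resp.raise_for_status()
        data = resp.json()
        return {
            "name": data.get("company_name"),
            "status": data.get("company_status"),
            "address": data.get("registered_office_address", {}),
        }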
Benefits could also be gained from standardising key package terms. When attempting to take a programme-wide view of packages such as Platform Edge Screens, Flooring, or Brick & Blockwork, it was frequently evident that variations in descriptions across different main contracts made it difficult to match like with like. By insisting on standard names for certain key packages at the outset (a true “common procurement vocabulary”), analysis across the programme would be quicker and more accurate.
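A lightweight way to enforce such a vocabulary is a controlled mapping from contractors’ free-text descriptions to canonical programme-wide names, rejecting anything unrecognised rather than guessing. The synonyms below are invented examples:

    # Controlled vocabulary: map free-text package descriptions from
    # contractors' schedules to a canonical programme-wide name.
    CANONICAL = {
        "platform edge screens": "Platform Edge Screens",
        "pes": "Platform Edge Screens",
        "brickwork & blockwork": "Brick & Blockwork",
        "brick/block": "Brick & Blockwork",
    }

    def standardise(description: str) -> str:
        key = description.strip().lower()
        if key not in CANONICAL:
            # Reject rather than guess; unmapped names go back to the contractor.
            raise ValueError(f"Unrecognised package name: {description!r}")
        return CANONICAL[key]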
Regular requests from other departments for reliable statistics on the Crossrail supply chain meant that large amounts of time were spent consolidating data from several sources (e.g. the Crossrail Finance Directorate and CompeteFor), because the database captured only the critical packages, a subset of the total activity on the project. With all Tier 2 activity in one location, this exercise would be far simpler.
Lessons learned
Duplication of data is unnecessary and an inefficient use of time. The ultimate goal should be for both the main contractors and the supply chain function to use exactly the same data in a single location, rather than a copy of the data laboriously reproduced by hand.
To deliver this objective organisations should:
- Identify suitable, customisable software at the outset for storing and managing data. This needs to be robust enough to handle thousands of entries and multiple users working at different sites.
- Set out specific data protocols to main contractors at the outset, defining the exact information required on an ongoing basis.
- Identify the ways in which the dataset will need to be analysed throughout the programme (e.g. by voting constituency or SME status) and ensure all these aspects are included in the chosen database.
Authors
Simon Pain BA(Hons) ACA MCIPS - Crossrail Ltd
Having qualified as a Chartered Accountant, Simon has performed a number of finance-related roles in the public and private sectors, currently for Transport for London and previously for a major UK retail bank and a City-based investment management firm. Disciplines covered to date include insolvency, procurement, business case appraisal, management accounting and supply chain management. Simon’s role on Crossrail is to monitor and manage financial and capacity risk in the supply chain, support and advise main contractors, and promote SME engagement in the supply chain.
Calum Upchurch - Crossrail Ltd
Supply Chain Specialist – Supply chain professional with significant experience in the rail industry.