Capacity and utilisation in the supply chain with logistics expert Kirsten Tisdale

February 04, 2020 by Kirsty Adams

Kirsten Tisdale describes a technique she has adopted for measuring and monitoring the relationship between capacity and utilisation in the supply chain.

A key area of focus in the supply chain always has to be the relationship between capacity and utilisation. So I’m sharing a technique I apply in Excel to all sorts of time-related data in logistics operations, enabling both visualisation of the detail of what’s going on and top-level utilisation measures. The graphic I’ve chosen is for vehicles, but this is a wide-ranging technique with many applications.

BREADTH OF APPLICATION

I first used this technique when I was working with a colleague at Marks & Spencer on what was called the Unified Fleet Project – examining to what extent we could reduce vehicle numbers if we considered them as one fleet, rather than attached to a specific food or general merchandise depot (answer: c.10%). I’ve also used it with more than one client to review yard operations, by analysing when vehicles, including suppliers’, are in the yard rather than out on the road, extending the analysis to bay/dock utilisation for multi-temperature operations.



Very recently I used the same technique looking at peak volumes and dwell within a cross-dock operation (early arrival doesn’t necessarily equal better), and I’ve also used it to delve into picking congestion within aisles.

PICKING CONGESTION

A while back I did a long project helping a well-known supermarket develop the concept for its first partly automated grocery home delivery operation. While I was there, I was also asked to review aisle congestion at its existing, totally manual operation in an older site that wasn’t suitable for mechanisation.

Some aisles were starting to get congested, and big growth was expected. I collated lots of data on real picking trips, but I also spent time in the aisles: what does congestion in the peak hour look like – how much activity does it take to create congestion?

I used that data and the technique I’m describing to analyse how many people were in each aisle every quarter-minute of the peak hour. I then produced a number of layouts using this analysis to show which aisles were affected by congestion right now, and how bad it was likely to look when projected five years out.
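As an illustration of the counting step only (this is not the author’s actual workbook), here is a minimal pandas sketch. The file and column names are assumptions – a pick-event table with a scan time, an aisle and a picker ID:

```python
import pandas as pd

# Hypothetical pick-event extract: one row per pick scan, with the
# time of the scan, the aisle and the picker. All names are assumed.
picks = pd.read_csv("pick_events.csv", parse_dates=["scan_time"])

# Bucket each scan into a quarter-minute (15-second) slot.
picks["slot"] = picks["scan_time"].dt.floor("15s")

# Keep just the peak hour being studied (illustrative date/time).
peak = picks[(picks["slot"] >= "2020-02-04 11:00") &
             (picks["slot"] <  "2020-02-04 12:00")]

# Count distinct pickers per aisle per slot - the congestion measure.
congestion = (peak.groupby(["aisle", "slot"])["picker_id"]
                  .nunique()
                  .unstack(fill_value=0))
print(congestion)
```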

I worked with colleagues at the location and in head office to work out what it could look like if multiple targets were achieved. These were the target areas we came up with:

• Flatten the peak hour of the week – probably by reducing the size of the peak day.
• Improve the hit rate of trays on a trolley that accessed busy aisles – how could we reduce the number of trolleys that need to visit that aisle?
• Speed up time for the actual pick – reduce the time that trolleys have to spend in that aisle.
• Mix and match products in busy/quiet aisles – don’t have ultra-busy and ultra-quiet aisles.

TIME-RELATED DATA 

What you need to get started is time-related data. In the situation described above, that was picking data; in the case of the graphic, it’s a vehicle schedule – I chose this latter example as the hour-by-hour analysis will fit on this page!

Behind the shaded bars in the main block are some equations that throw up 0, 1 or 2 in each cell as a result of comparing the start and return times of the vehicle with the time of day, and the type of vehicle required. The colouring of the bars is achieved by conditional formatting of cell fill and font – grey for artic, green for rigid.
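The article doesn’t reproduce the Excel formulas, but the per-cell logic is easy to sketch. Here is one way it might look in pandas, with made-up trips, and with 1 standing for an artic out on the road and 2 for a rigid – the values the conditional formatting keys on:

```python
import pandas as pd

# Made-up vehicle schedule: departure and return hours plus type.
trips = pd.DataFrame({
    "vehicle": ["V1", "V2", "V3"],
    "start":   [6, 8, 9],     # hour of day the trip starts
    "back":    [10, 14, 11],  # hour of day the vehicle returns
    "kind":    ["artic", "rigid", "artic"],
})

code = {"artic": 1, "rigid": 2}
hours = list(range(24))

# 0 = idle, 1 = artic out, 2 = rigid out, per vehicle per hour -
# the equivalent of the article's cell equations comparing the
# time of day with each trip's start and return times.
grid = pd.DataFrame(
    [[code[t.kind] if t.start <= h < t.back else 0 for h in hours]
     for t in trips.itertuples()],
    index=trips["vehicle"], columns=hours,
)
print(grid)
```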

The block at the bottom shows total hourly usage by vehicle type (by counting 1s and 2s in the main block) and is then also conditionally formatted to show when all vehicles are in use (light red) or the number available is exceeded (red). Together these blocks allow you to visualise what is causing that over-capacity situation and consider which trips might be ‘jigsawed’ together – either without causing any issues at all, or perhaps with minor changes.
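Continuing the sketch above (the fleet sizes here are invented), the bottom block’s totals and the capacity check might look like:

```python
# Hourly usage by type: count the 1s (artics) and 2s (rigids)
# down each hour's column of the grid built above.
artics_out = (grid == 1).sum()
rigids_out = (grid == 2).sum()

# Assumed fleet sizes - flag hours at or over capacity, mirroring
# the light-red / red conditional formatting in the workbook.
ARTIC_FLEET, RIGID_FLEET = 2, 1
summary = pd.DataFrame({
    "artics_out": artics_out,
    "rigids_out": rigids_out,
    "all_in_use": (artics_out == ARTIC_FLEET) | (rigids_out == RIGID_FLEET),
    "exceeded":   (artics_out > ARTIC_FLEET) | (rigids_out > RIGID_FLEET),
})
print(summary)
```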

LOOKING FORWARD

In the grocery home delivery case study above, I mentioned the idea of making projections so you can get a feel for the pressures that a logistics operation might be under in, say, five years. This is going to form the basis of my next piece: in making projections or forecasts it’s important to understand the assumptions that have been made, and their degree of robustness.

 


Big data, little data and data mining 

In the first of a series of data-focused articles, Kirsten Tisdale explores the big benefits in little data. From the December issue of SHD Logistics.

There’s a lot written about the hot topic of big data – the massive volumes of data that are too large or complex to be dealt with by traditional processing. Plainly, if you have absolutely masses of fast-growing, and possibly unstructured, data and the AI to deal with it, that’s great. And by the way, while we’re on the subject of definitions, medium data is data sets that are too large to fit on a single machine, but don't require huge numbers of them.

But that’s not what this series of articles is going to be about. This series is mainly going to be about improving your logistics and supply chain by using the smallish data widely available in the workplace – data you can collect, analyse and model with affordable tools on your own laptop.

Now, I thought I’d come up with the expression ‘smallish data’ but, no, it was already out there by 2004 in a book on data mining techniques – although my definition would be a bit larger than that book’s few thousand rows and fewer than a dozen columns. I’d say tens of thousands of rows is more typical these days.

Small data detail

And then there’s small data, which is what I’m going to talk about in this particular piece, because sometimes it pays to look at the detail.

A client sent me more than a gigabyte of data with the comment that, since the company had started sourcing from China and the Far East, the DC was getting fuller and fuller, and no-one could understand why.

I started off analysing the data in bulk but didn’t really get anywhere – I just couldn’t understand what the data was telling me… or rather, it was telling me something strange. So I decided to look at some particular SKUs where I could see there was a problem.
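The drill-down itself needs nothing exotic; a minimal pandas sketch, with illustrative file and column names, might be:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical movement extract; all column names are assumptions.
df = pd.read_csv("dc_movements.csv", parse_dates=["date"])

# Filter to one problem SKU and build a weekly picture: stock is a
# level (take the week-end value), flows are summed over the week.
weekly = (df[df["sku"] == "12345"]
          .set_index("date")
          .resample("W")
          .agg({"inventory": "last",
                "store_replenishment": "sum",
                "store_sales": "sum"}))

weekly.plot(title="One SKU: inventory vs replenishment vs sales")
plt.show()
```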

As you can see from the chart, there didn’t appear to be much in the way of store replenishment, but that didn’t seem to be affecting store sales. And I could see inventory level rising and rising as further shipments arrived. As soon as I showed this individual SKU level analysis to the client, he responded by saying: oh no (or words to that effect) – the stores are obviously still buying that themselves! I hadn’t been told that the stores could procure merchandise at a local level and it hadn’t occurred to me that this was what was giving me my data conundrum.

So the cheaper sourcing from China was being completely ignored and, after a couple of initial centrally-organised pushes of stock, the stores had just reverted to doing what they’d always done.

Business intelligence

This analysis was carried out in Omniscope from Visokio, a business intelligence, data management and visualisation tool that has become more affordable. There’s also Tableau, another data visualisation tool, recently acquired by Salesforce. These sorts of tools make it easy to analyse multiple fields simultaneously, to query your data, and to filter it – seeing the impact of eliminating outliers, for example, or, in this case, filtering down to single SKUs.

And I’m a big fan of Excel. I know there are people who like to laugh at the idea of using Excel in this day and age, but I like it: it’s simple when nothing else is needed and easily shared with other people (a big plus), good for ‘what ifs’ in something like a cost-benefit analysis, easy to add complexity to models when required, and there are no hidden mysteries – anyone can work through the answers a model has generated.

The lesson from this case study is that if the big picture doesn’t seem to be making sense, try looking at a much smaller sample and see what it’s telling you.

Benchmarking & warehouse location

In the next few articles, I’ll be concentrating on smallish data. I’m intending to look at a variety of case studies, from benchmarking to warehouse location, and at approaches from specific techniques through to quite complex logistics models – the analysis and modelling, and the results and lessons that come from them. My approach is always “what’s the data from your operation saying?” rather than “this is what they did in the last warehouse I was in”. As I said at the start, this series is going to be about using data to improve your logistics and supply chain – turning data into action plans.

Kirsten Tisdale, principal, Aricia

Kirsten Tisdale is principal of Aricia, the logistics consulting company she established in 2001, specialising in strategic projects needing analysis and research. Kirsten is a Fellow of the Chartered Institute of Logistics & Transport, with a career spanning both sides of the logistics relationship as well as consultancy projects.
