40 Years Of Compound Management Evolution - Part 2: The Advent Of Automation

Roger Martin
The Compound Management process has changed significantly since I started working in discovery research as a medicinal chemist in 1978. Part 2 discusses the beginning of automation:


<< Read previous section 

Making progress: mid-90s

The beginning of automation

Mergers in the drug discovery industry often meant that several companies’ compound collections were brought together. The advent of High Throughput Screening (HTS) in the mid-90s meant that samples began to be actively collected from scientists to create a screening collection. With this came the start of the formation of Compound Management (CM) departments as we know them today.

At my company, the need to build HTS collections and manage samples drove a policy that all samples made should be sent to the local CM department for distribution to the project biologists. Initially there was some resistance, and for a while the chemist could keep some sample for the project biologists and send the rest to CM; but it soon became the norm for CM to handle it all.

HTS brought laboratory automation, from companies like The Automation Partnership (TAP) and RTS Life Science, to prepare the compounds in a suitable format for screening. This automation needed data to operate, as well as generating and processing the final output data – a significant increase in data compared to the manual methods described earlier. Hence, with lab automation came software to manage the data.

Initially, software applications were developed to address the specific platforms being used. This meant that whilst databases of compound amounts, locations and so on were created, they were not integrated with one another, as their primary focus was the automation task. Also, whilst solutions in 96- or even 384-well plates were well suited to rapid screening of a compound collection, they were not suited to the ad hoc supply of compounds for project screening.

[Image: an automated sample store]

What changed this situation was the introduction of automated sample storage for both solids and solutions. Now information about the individually accessible compounds was available in databases. Whilst these databases were not associated with an outward-looking laboratory inventory management system, for the first time they could be accessed with software like MDL ISIS, Pipeline Pilot and InforSense. These could connect databases in an ad hoc fashion, which started to provide availability data directly to the scientist. From this, a number of different laboratory inventory management systems started to appear, sometimes with overlapping functionality.

The Available Chemicals Directory, an online database of commercially available chemicals, improved scientists’ access to structure searching. Whilst this was mostly used for finding reagents for chemical synthesis, it did contain some potential drug-like molecules. New databases became available as companies like SPECS actively collected more commercially available drug-like molecules. Initially these databases were relatively small, and each supplier had its own, which meant that we had to collate them internally in order to make them available to our scientists.


Making progress: communication & integration

Modern times

Following a spate of pharma company mergers around 2000, many Compound Management teams had a variety of legacy software, some doing essentially the same job, and mostly not communicating easily with each other. At the same time the amount of lab automation used by Compound Management also expanded, bringing the need to integrate it all into a co-ordinated process, managed by a single coherent system.

At my company, programming and deploying custom compound management software required a lot of resource. At its peak, about 50 people were working on it, including three project managers, each running a number of projects, together with a programme team overseeing it all, of which I was the deployment manager.

Many sites with different release schedules meant that, for a time, different versions of the new software were required at different sites. In those days, software had to be installed on every CM computer on which it would be used. We therefore had to address which software version was needed and how to keep it up to date. This was done by installing and running the application via a ‘stub’ program, which would check the inventory location being accessed and the software version installed on the user’s PC, and update the application accordingly before running it.

To Compound Management staff, it appeared that they were simply running the application at site “X”, but the ‘stub’ ensured they ran the correct version for the task. Nowadays, web-based software means the user simply opens the appropriate URL and no installation is required on the client PC.
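As an illustration, here is a minimal sketch of the idea behind such a ‘stub’ launcher. The paths, file layout and version-lookup mechanism below are all hypothetical – the real launcher was internal software – but the check-then-update flow is the same:

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations -- the real deployment used internal file shares.
RELEASE_SHARE = Path(r"\\cm-fileserver\releases")   # per-site release area
LOCAL_APP_DIR = Path(r"C:\Apps\CompoundManager")

def required_version(site: str) -> str:
    """Look up which application version the given site is currently on.
    In practice this might be a small text file or a database entry per site."""
    return (RELEASE_SHARE / site / "current_version.txt").read_text().strip()

def installed_version() -> str:
    """Read the version marker left behind by the last local install."""
    version_file = LOCAL_APP_DIR / "version.txt"
    return version_file.read_text().strip() if version_file.exists() else ""

def launch(site: str) -> None:
    wanted = required_version(site)
    if installed_version() != wanted:
        # Copy the correct release down to the local PC before running it.
        shutil.copytree(RELEASE_SHARE / site / wanted, LOCAL_APP_DIR,
                        dirs_exist_ok=True)
    subprocess.run([str(LOCAL_APP_DIR / "CompoundManager.exe"), "--site", site])

if __name__ == "__main__":
    launch("X")
```

The user only ever started the stub; whether an update happened first was invisible to them.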

‘Requesting Systems’, which allowed scientists to request assay samples, were consolidated around the same time. Although these systems showed stock availability and the open project orders that could be requested against, they were decoupled from the compound management software and did not directly add requested items to orders; instead, they alerted CM staff to a new request.

This decoupled design made integrating the new CM applications during the software consolidation process relatively easy. However, the availability data was an approximation: until CM transcribed the request into an actual order, there was a danger that sufficient stock might not be available (for example, there might be multiple requests pending for the same sample).
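A hypothetical sketch of the pitfall: reading availability straight from the stock table makes two pending requests for the same sample both look satisfiable, whereas accounting for the not-yet-transcribed requests reveals the shortfall (all identifiers and quantities below are invented):

```python
# Hypothetical data: physical stock vs. requests CM has not yet transcribed.
stock_mg = {"CMPD-0001": 5.0, "CMPD-0002": 1.2}
pending_requests_mg = [("CMPD-0002", 0.8), ("CMPD-0002", 0.8)]

def naive_available(compound_id: str) -> float:
    """What the decoupled requesting system showed: stock on the shelf."""
    return stock_mg.get(compound_id, 0.0)

def effective_available(compound_id: str) -> float:
    """Stock minus everything already promised to earlier, pending requests."""
    reserved = sum(amt for cid, amt in pending_requests_mg if cid == compound_id)
    return naive_available(compound_id) - reserved

print(naive_available("CMPD-0002"))      # 1.2 mg -- looks fine for a 0.8 mg request
print(effective_available("CMPD-0002"))  # -0.4 mg -- actually oversubscribed
```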

Finally, with coherent compound management software came the standardisation of “Immediate Processing”. Scientists and CM defined which assays needed to be run, and the software guided CM on which orders to create for each compound. Because the software was integrated with the automation needed to process the order, the required amount of compound could be calculated, and any shortfall could be fed back immediately.
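The underlying calculation is simple; a sketch of it in Python, where the assay names and consumption figures are hypothetical:

```python
# Hypothetical assay definitions: how much compound each assay consumes.
ASSAY_REQUIREMENTS_MG = {"HTS-primary": 0.5, "selectivity": 0.3, "tox-panel": 1.0}

def check_immediate_processing(compound_id, assays, stock_mg):
    """Work out the total amount the selected assays need and flag any
    shortfall immediately, before orders are created."""
    required = sum(ASSAY_REQUIREMENTS_MG[a] for a in assays)
    if required > stock_mg:
        print(f"{compound_id}: insufficient stock "
              f"({stock_mg} mg available, {required} mg required)")
    else:
        print(f"{compound_id}: create orders for {', '.join(assays)} "
              f"({required} mg of {stock_mg} mg)")

check_immediate_processing("CMPD-0001", ["HTS-primary", "tox-panel"], stock_mg=1.0)
```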

Initially, Immediate Processing was driven by pre-agreed codes handwritten on a submission label, which were then entered into the software by the CM operative. However, once the pre-barcoded vial had been registered into the system, the label was removed, as everything was now driven by the vial barcode. Since the label was temporary, the next logical step was to remove it entirely and directly associate the contents with the vial barcode in a web-based Sample Submission system.

What had been achieved

Changes between 1978 and 2012 (when I left the major pharma company) meant that, using compound management software, it had become easy to find which samples were available for a particular order. Given a list of compound identifiers, I could select assay(s) and see if sufficient sample was available – not just at one physical site, but across the whole company – and place my order seamlessly. Compound Management services would receive my order and manage delivery in the most efficient way.
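In effect, the software was answering a simple cross-site aggregation query. A hypothetical sketch of the idea (sites, compounds and amounts invented):

```python
# Hypothetical cross-site stock records: (compound_id, site, amount in mg).
stock_records = [
    ("CMPD-0001", "UK", 0.4), ("CMPD-0001", "US", 2.0),
    ("CMPD-0002", "UK", 0.1),
]

def company_wide_stock(records):
    """Sum each compound's stock across every site."""
    totals = {}
    for compound_id, _site, amount in records:
        totals[compound_id] = totals.get(compound_id, 0.0) + amount
    return totals

needed_mg = 0.5  # amount the selected assay(s) would consume
for compound_id, total in company_wide_stock(stock_records).items():
    status = "orderable" if total >= needed_mg else "insufficient"
    print(f"{compound_id}: {total} mg across all sites -> {status}")
```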

Read the next section >>

 


Download our white paper: The Essential Guide to Managing Laboratory Samples

In any case, consider talking to one of our Titian experts – there is a better way to manage your samples!
