Michael Popham is Digital Preservation Analyst at the Digital Preservation Coalition (DPC)
Every year since we launched RAM in 2019, we’ve organized a global “RAM Jam” event to give both new and experienced RAM practitioners an opportunity to come together and share their views and experiences of using the DPC’s Rapid Assessment Model. These sessions provide an ideal opportunity for DPC members to learn from one another, to pick up ideas about how best to undertake a RAM, and to make the most of the results.
Last year, the RAM Jam took the form of a round-the-world relay race, in a series of four sessions spread across different global time zones. The idea was that each session would pass on the “baton” of lessons learned to the next stage in the race, until we’d completed one full circuit of this marathon 24-hour course – and had crossed the finishing line both elated and enlightened. You can read all about it in Jen Mitcham’s race report, posted to the DPC blog.
This year’s event adopted a similarly global approach but placed the emphasis on the journey rather than the destination – sharing local knowledge and insights along the way. And so before this metaphor becomes any more strained, I’ll just give you a quick selection of highlights from the trip.
For RAM Jam 2025 we set off at 03:00 UTC with members in the Australasian/Asia-Pacific regions. The first speaker was Jack Wain of Deakin University, who explained their decision to undertake multiple RAM assessments. Deakin has a number of different collections that they felt would be in scope: enterprise and corporate data (records); Deakin Archives; research outputs (including HDR theses); and digitized Special and Cultural collections. There is a lot of cross-over between the systems and teams involved: content sits on four core platforms, across more than six high-volume storage locations, with three to six distinct teams interacting with it. They recognized that this environment raised two particular challenges: different priorities between the various teams (due to their fundamentally different material), and different levels of existing policy.
The team at Deakin decided to start their assessments small and focused, and to build up from there. In 2024-25 colleagues from both the Archives and Records teams jointly undertook a RAM assessment for Deakin Archives, and were pleased with how smoothly the process went. For 2025-26 they plan to revisit and refresh their RAM for the Archives, and also create an additional RAM assessment for their Special Collections, working collaboratively with colleagues from metadata and discovery. If all goes well, in 2026-27 they will refresh the assessments for both Archives and Special Collections, and hopefully undertake a first RAM assessment for an additional area, probably enterprise and corporate data. They believe that taking this iterative and incremental approach will enable them to build up their expertise with RAM, and to grow their understanding of digital preservation in each collecting area over time. RAM provides a useful internal audit, and information that can be fed into Deakin’s digital preservation strategy as part of an ongoing cycle of work.
Next up was Elizabeth Alvey from the University of Queensland, who explained how they had incorporated RAM into their Library leadership thinking and training. Like Deakin University, Queensland has multiple teams dealing with digital content spread across the organization, and undertaking a RAM assessment proved a useful way to identify key stakeholders. Earlier attempts at digital preservation planning by a working group in 2020-21 had been unsuccessful, and they felt that it was important to start over and re-establish trust between the various teams.
Elizabeth and colleagues began by setting up five themed focus groups, each looking at a different set of RAM capabilities. They used this approach to rebuild trust and to provide occasions for genuinely open dialogue and honest input, focused on client delivery. They found that focus group participants liked the opportunity to share their real-world experiences, and it enabled the organizers to collect a range of insights that they were able to explore further in follow-up sessions. The focus groups provided a safe space for participants, creating a shared language and improved communication between colleagues, whilst always remaining solutions-focused. They resisted the temptation to become obsessed with their RAM scores, and found it helpful to include a lot of qualitative data in their responses. Bringing groups of people together helped create a greater sense of ownership of the results, and mitigated the risks of potential bias or blind spots which can arise when a RAM is compiled by a single individual.
The University of Queensland Library has recently introduced a number of new digital preservation roles, and their RAM assessment has helped with the process of navigating change. Key themes that emerged from their RAM were presented as a written report and elevator pitch to the Library’s senior leadership, and will feed into their digital preservation programme starting in 2026.
Overall they found that the RAM was more time-consuming than they had originally anticipated, in part because they felt that it was important to look at RAM as a whole to ensure consistency across the levels. The use of focus groups (and simple strategies like making cotton wool sheep!) improved connections between staff and helped raise awareness of the fact that digital preservation is everyone’s responsibility.
Six hours later, at 10:00 UTC, we were in the UK for a meeting scheduled to suit members based in Europe and Africa. The session began with a presentation from Jonathan Bushell and Nick Kelly of the Institute of Chartered Accountants in England and Wales. They described how, even though they had only undertaken two RAM assessments to date, the exercise had provided invaluable guidance on what they should do next and where they should prioritize their efforts, as well as a chance to reflect on what they had managed to achieve. RAM helped them identify that missing or incomplete metadata was constraining progress, and provided the evidence they needed to win support to develop an AI-powered solution. A subsequent RAM assessment gave them the data and metrics to demonstrate to colleagues and senior management that genuine progress was being made, and helped benchmark their work against comparable organizations.
Katie Waring then described how they had made good use of RAM at Lancaster University. Over the course of several years they had made a number of well-intentioned but ultimately unsuccessful attempts to make progress with digital preservation; good work was happening piecemeal across the university library, but they knew they could do better. Undertaking a RAM assessment helped them gather data about where they were, where they wanted to be, and what needed to change, captured in metrics and summaries that were readily comprehensible to both colleagues and senior managers. Whilst the RAM has proven to be an invaluable advocacy tool, it seems clear that their most effective progress in digital preservation has been achieved by a bottom-up approach, concentrating on discrete issues and tasks that can be improved; they hope to use future RAMs to measure and demonstrate these positive changes.
The final speakers were Garth Stewart and Eve Wright, describing their work with RAM at the National Records of Scotland. Despite having undertaken successive RAM assessments in 2020, 2021, and 2023, they felt that their organization had not sufficiently taken the findings on board or followed through on the changes that needed to be made. So in 2024 they took the conscious decision to make their RAM assessment a wider effort, bringing in more senior colleagues from across the organization, and having open and honest conversations about their RAM assessment scores. Being able to see the RAM data from previous years provided the raw data and metrics needed to develop key messages for a new business case to procure a digital preservation solution, and to involve colleagues from their IT department. NRS now plan to make an annual RAM assessment a business-as-usual exercise, with easy-to-follow metrics about where progress has, or has not, been made towards agreed goals. They also found that involving more colleagues in the RAM assessment process has helped spread the message that digital preservation is a responsibility that is shared across the organization.

Nine hours later we launched the third and final session in this year’s RAM Jam: starting at 19:00 UTC and timed for members in the Americas (and Europeans who like to work late!).
We began with a presentation from Kari May, the Digital Archives & Preservation Librarian at the University of Pittsburgh, who described how she had organized their local Digital Preservation Working Group to undertake an interdepartmental RAM. They sought input from colleagues across several departments in the Library, and chose to adopt quite a formal and structured approach, for example by having regular meetings and ensuring that accessible definitions of any specialist digital preservation terms were produced and shared at every meeting. They scheduled a series of meetings to focus discussion on the assessment of each capability listed in the RAM, bringing together relevant stakeholders as necessary. Kari served as the one constant member of all the meetings, to ensure consistency in both approach and outputs. Whilst they were aware that this approach meant their RAM assessment was perhaps not “rapid”, they felt it had been a very successful way to get colleagues engaged with digital preservation, and to produce an accurate picture of their current capabilities and agreed target levels. [Readers who wish to know more about Kari’s work should see the new Technology Watch Guidance Note mentioned at the end of this blog post.]
Next, we heard from Amanda Tomé, Preservation Coordinator at the Digital Research Alliance of Canada, who described the national DPC RAM benchmarking project undertaken by the Canadian Association of Research Libraries (CARL). The project grew out of discussions at the @Risk North 3 conference, which had identified the need to develop insight into the current status of digital preservation in Canada. They chose RAM because of its ease of use and flexibility, and because it was already familiar to some CARL members.
The benchmarking project has two phases: a pilot limited to CARL members which took place in the last quarter of 2025, and a subsequent phase using the lessons learned from the pilot aimed at the broader community (i.e. looking beyond research libraries) to gain a clearer understanding of digital preservation across Canada. Amanda highlighted how they had set up separate English language and French language office hours, to support as many people as possible through the process.
Amanda summarized the lessons already learned from the pilot phase of their benchmarking work:
- The digital preservation community in Canada is quite diverse (e.g. institutions may have more than one department responsible for DP, with different governance/reporting structures, infrastructures, and resources).
- “Rapid” does not always mean fast! Participants noted that it took at least 8 hours to complete a RAM assessment, and that involving more people meant the process took longer. There was also a difference between institutions conducting their first RAM assessment versus subsequent assessments, and the extent of participants’ digital preservation knowledge was also a factor in how long assessments took.
- The consensus was that having a dedicated project geared towards benchmarking meant that benchmarking was more likely to happen. It had fostered a sense of community, and people appreciated that they were not working in isolation.
- It is important to get the balance right in terms of the quantity and quality of supporting resources made available to participants, as too many resources could feel overwhelming.
Overall impressions at this stage are that the benchmarking project has fostered a valuable sense of community amongst digital preservationists working across Canada, and that further lessons are likely to emerge as the process continues. It has already become apparent that context is key to the assessment results: findings can vary widely because people, organizations, internal arrangements, and ways of working differ so much.
The third presentation in this session came from Sean Macmillan, Digital Collections Manager in the library at King’s College London (KCL). Sean described how he had created an online game to introduce people to DPC RAM and the digital preservation concepts that underpin its use. In response to an invitation from colleagues at the National Library of Nigeria, Sean used Twine to create a linear interactive game to help explain the process and decision-making that might lead someone to use a resource like DPC RAM.
Sean gave a brief live demonstration of the game. He stressed that it was important to be clear and careful about any specific language and terminology used, to help ensure that all players understood the topic. The game was also designed not to catch players out, but to guide them through a series of scenarios and multiple-choice questions leading to feedback and additional resources useful in the given scenario. As players proceed through the game, they are gently introduced to each of the 11 capabilities covered by a RAM assessment.
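For readers who have not used Twine, the fragment below is a minimal sketch, in Twine’s Twee notation, of how one scenario-and-feedback step of such a game might look. It is not taken from Sean’s game: the passage names, scenario text, and feedback wording are all invented for illustration.

```
:: Start
<!-- Each ":: Name" line starts a passage; [[text->Target]] renders a choice. -->
A retiring colleague hands you a hard drive holding twenty years of
departmental records. What do you do first?

[[Copy it onto a spare drive and deal with it later->AdHoc]]
[[Check whether a policy covers taking in new digital content->Policy]]

:: AdHoc
That keeps the bits safe for now, but with no record of what you hold, content
can easily become unmanaged. A structured self-assessment such as DPC RAM can
help you spot gaps like this. [[Try the scenario again->Start]]

:: Policy
Good instinct! Thinking about policy before acting is exactly the kind of
question a RAM assessment walks you through. [[Next scenario->Start]]
```

Compiled with Twine (or a command-line tool such as tweego), a set of passages like this becomes a playable HTML page that can be shared with colleagues.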
Sean concluded by describing how he has introduced games into several of his induction and training courses at KCL. He strongly believes that they can be an effective way to communicate ideas and concepts, and are particularly well-suited for use with colleagues who find traditional teaching and learning methods to be less successful.
The final presentation in this session, and of the day-long RAM Jam, was by Teresa Soleau of the J. Paul Getty Trust. She outlined how they have used RAM and its approach to continuous improvement to inform and drive the work of a small preservation team and colleagues working in different roles across the Trust.
Teresa first undertook a RAM assessment in 2023, and used the findings that emerged to engage with colleagues at an event organized for World Digital Preservation Day that year. One of the key findings of the assessment was that they had a significant amount of digital content that was effectively unmanaged, and it also highlighted several other areas of digital preservation capability where they could implement relatively small changes to improve their RAM scores.
A second RAM in 2025 showed that they had managed to level up in two areas, and had succeeded in getting digital preservation included in the organization’s 5-7 year roadmap. Teresa outlined how the findings of their two RAM assessments had helped build the case for additional staff (albeit on time-limited funding), and had been used to highlight both the aspects where the Getty was doing well and the areas where changes could be made to improve their RAM scores.
Looking ahead, the Getty are now considering undertaking RAM assessments for specific areas of activity (e.g. Museums), and looking at using related tools – like the Competency Audit Toolkit (CAT) – to inform the development of a further business case.
The session concluded with some general discussion and observations about using RAM. All the presenters agreed that undertaking a first RAM assessment had been more time-consuming than subsequent iterations, but repeating the exercise revealed valuable metrics regarding areas where progress had or had not been made. Engaging more colleagues in the RAM assessment exercise was generally felt to be a worthwhile option, provided sufficient effort and resources are put into ensuring a common understanding and shared vocabulary/terminology amongst participants. Several people noted that whilst it felt relatively straightforward to get consensus on where their organization scored in its current capability levels, getting agreement on target levels for where the organization should aspire to be could be more time-consuming and contentious. Whilst there was an acknowledgement that people’s understanding of the term “rapid” could differ significantly between individuals, organizations, and contexts, the value to the organization of bringing people together to undertake a RAM assessment and to review its findings – especially if this is done on a regular basis – cannot be overstated.
To coincide with this global RAM Jam, the DPC launched a new publication in its series of Technology Watch Guidance Notes: Coordinating an Interdepartmental Rapid Assessment Model, by Kari May of the University of Pittsburgh Library System. It is on members-only preview until 5th January 2026, after which it will be made freely available to all.