
We’ve funded a range of projects to guide our roadmap for the future.

Tokenised Authentication for Research Computing Services (TARCS)

HPC and other batch-scheduled research computing services have historically been accessed via dedicated login nodes. These are specialised servers, typically accessed via ssh and operating in a command-line (text-based) environment, that provide users with access to the resources of the service – namely the file systems used to store data and the batch scheduler which schedules work onto the compute cluster itself.

It is not typically possible to directly access resources such as the scheduler or file system externally, for example from a different research computing service or a cloud instance. In this project we aim to demonstrate making these resources directly externally accessible, in a secure manner, using tokenised authentication.

One of the limitations of traditional login nodes is that, in order to maintain service security, users cannot be allowed root privileges, preventing them from making certain customisations to their environment. Making these services available externally, for example to a user-managed VM, would allow users to deploy a variety of research enablement services which are not possible without root privileges, or which are not appropriate for a shared login node. These services include:

  • Continuous Integration / Continuous Deployment (CI/CD) servers supporting research software development
  • Workflow management servers allowing user groups to integrate external workflows with the internal batch scheduler
  • Licence servers for research software

In this project we will demonstrate and assess the use of tokenised authentication to integrate the scheduler and file system services of the HPC service Cirrus with a VM in the University of Edinburgh Eleanor service, as well as with the IRIS Somerville cloud platform. A variety of workflows will be explored, and this approach will be assessed from both a user and technical perspective.
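The token-based access pattern described above can be sketched with a minimal HMAC-signed bearer token: the client presents a token scoped to a particular resource (scheduler or file system), and the service verifies the signature, expiry, and scope before granting access. The secret, claim names, and scopes below are purely illustrative assumptions, not the project's actual design; a production deployment would typically use standard JWTs issued by an identity provider.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; a real service would fetch keys from an identity provider.
SECRET = b"shared-secret"

def mint_token(user: str, scope: str, ttl: int = 3600) -> str:
    """Mint a signed token granting `user` access to one resource scope."""
    claims = {"sub": user, "scope": scope, "exp": int(time.time()) + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Check signature, expiry, and scope before allowing access."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = mint_token("alice", "scheduler")
assert verify_token(token, "scheduler")       # valid for the scheduler
assert not verify_token(token, "filesystem")  # wrong scope is rejected
```

The point of the scope claim is that a token minted for the scheduler cannot be replayed against the file system service, so each externally exposed resource can be gated independently.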


1. Inclusive Futures: User stories and mapping pathways for National Federated Compute Services

2. Federated AI application container platform / registry feasibility study

3. Enhancing HPC adoption through user-centred design

4. Federating Everything, Everywhere, All at once

5. UKRI research data landscape survey

6. Bridging the Gap: Aligning project administration with access to digital research infrastructure

7. Towards a common data infrastructure for laboratory science

8. Federated Edge–HPC architectures for AI workflows in privacy-sensitive and real-time domains

9. UNITED: A Framework for federated computing roadmaps

10. Exploring the requirements and technologies for a data centre API

11. Federation of compute and infrastructures in the arts and humanities

12. A federated bespoke AI-assisted helpdesk for DRI facilities

13. Surveying accessibility of federated compute for junior researchers

14. Job Orchestration using Constellation on Heterogeneous HPC resources

15. Federated IAM for existing infrastructures

16. Evaluation of Secure Federated Kubernetes Storage for Trusted Research Environments

17. Net-Zero and Circular Economy Federation: Evidence-based Policy Roadmap for Carbon-Aware Compute in the Built Environment

18. Federated data movement

19. DRI Federation Cybersecurity Roadmap development

20. Identifying Barriers for Biology Researchers Using Federated HPC Services

21. Exploring the governance requirements for enabling UK DRIs to adopt MyAccessID

22. Supercomputers and Superpositions: Making Quantum Accelerators Accessible Within HPC Frameworks

24. FAIR-Compute: A Roadmap for Fair and Efficient Allocation of Federated Digital Research Infrastructure

25. ACCoRD (A Community for Contract Regulation for Data)

26. Federated data access across the DRI

© 2026 NFCS