Hilltop Digital Lab

Conversational Health Engine - Collaborative Knowledge (CHECK)

CHECK is a suite of quality assurance and analytical tools for providing insight into health conversations and related interactions. It intelligently integrates information from unstructured text and activity data, combining evidence-based behavioural markers in explainable models to enable prediction and stratification, and power timely interventions to improve engagement and outcomes.

Features

  • Inclusive, sustainable design, co-developed using Responsible Research & Innovation
  • Helps tailor interventions to match an individual's stage of change
  • Applicable to wide range of health conditions and comorbidities
  • Vulnerability alerts to support stratification, safeguarding and escalation
  • Scaffolds coaching responses to capitalise on states of opportunity
  • Promotes engagement and efficacy; builds skills for sustainable self-management
  • Enriches support - leverages bonding, bridging and linking social capital
  • Assesses compassion fatigue to provide early warning of staff burnout
  • Scalable solution - improves quality of care and increases access
  • Reduces administrative burden via conversational insights, summaries, and smart replies

Benefits

  • Incorporating Explainable Artificial Intelligence (XAI) to provide transparency and trust
  • Collaborative machine learning with Humans-in-the-Loop controlling final decisions
  • Peer reviewed by Greater Manchester AI Foundry technical and ethical experts
  • Grounded in evidence-based transdiagnostic psychological theories of motivation and change
  • Employing leading edge transformer language technology based on open models
  • Active learning to capture professional expertise and maintain currency
  • Improved predictions through integration of EHR and engagement data
  • Automated clinical and patient summaries of key interactions and patterns
  • Risk and triage models aligned with clinical guidelines and governance
  • Securely hosted, interoperable with Clinical and Personal Data Stores

Pricing

£2,000.00 to £12,500.00 an instance a month

  • Education pricing available

Service documents

Request an accessible format
If you use assistive technology (such as a screen reader) and need versions of these documents in a more accessible format, email the supplier at gareth.roberts@hilltopdigitallab.com. Tell them what format you need. It will help if you say what assistive technology you use.

Framework

G-Cloud 13

Service ID

731967703823984

Contact

Hilltop Digital Lab, Gareth Roberts
Telephone: 07949049702
Email: gareth.roberts@hilltopdigitallab.com

Service scope

Software add-on or extension
Yes, but can also be used as a standalone service
What software services is the service an extension to
HD Labs' Data Orchestration Ecosystem
Cloud deployment model
  • Public cloud
  • Hybrid cloud
Service constraints
CHECK is an open-source, cloud-based platform that uses market-leading cloud providers and open-source component services, including Kubernetes and Kafka, to manage event-driven analytics from conversational sources.
System requirements
  • Access to a cloud platform provider (e.g. Azure, AWS or GCP)
  • Internet browser

User support

Email or online ticketing support
Email or online ticketing
Support response times
As part of the support process, HD Labs provide a dedicated email address, telephone number and web portal to be used to raise support tickets.

During normal business hours (weekends by prior agreement), HD Labs will respond within 4 hours.
User can manage status and priority of support tickets
Yes
Online ticketing support accessibility
WCAG 2.1 AAA
Phone support
Yes
Phone support availability
9 to 5 (UK time), Monday to Friday
Web chat support
No
Onsite support
Yes, at extra cost
Support levels
Standard support is included in the cost. Service levels are based on priority:

1) Critical incident (e.g. affecting all CHECK users): response 1 hour, resolution 8 hours

2) Major incident (e.g. component or model failure for users): response 2 hours, resolution 2 days

3) Minor incident (e.g. performance impacted for some users): response 4 hours, resolution 5 days

4) Request for change (e.g. new functionality requested): response 2 days, resolution planning based on a maximum of 28 days

Support calls are routed to 2nd line support as appropriate.

Customers have access to a named account / project manager.

On-site support is available by arrangement and is charged on a time and materials basis (see Rate Card).
Support available to third parties
Yes

Onboarding and offboarding

Getting started
HD Labs follow a multi-phase onboarding process, which provides a robust, agile and rapid approach to capability provision that has been tested, refined and proven.
The phased approach and sequence follow the Government Digital Service recommendations: i) Discovery, ii) Alpha, iii) Beta, iv) Live.
Service documentation
Yes
Documentation formats
  • HTML
  • Other
Other documentation formats
Wiki
End-of-contract data extraction
CHECK is a suite built on customised open-source models, which remain open source.

HD Labs can export all existing data in its platform into raw formats. CHECK is an Open Source Platform built on open standards and designed to prevent vendor lock-in. All data in the platform can be securely exported in non-proprietary formats including open source databases for use in other systems or databases. HD Labs will work with the customer to determine the best export format for their destination systems.
End-of-contract process
CHECK is built on an open-source infrastructure and technology stack, removing platform licence costs and allowing local development teams to build and extend the platform as required.

At the end of the contract, the customer can extract the data in an open format or arrange for a specific output format, costed in line with the rate card and agreed with the customer.

Using the service

Web browser interface
Yes
Supported browsers
  • Internet Explorer 11
  • Microsoft Edge
  • Firefox
  • Chrome
  • Safari
  • Opera
Application to install
No
Designed for use on mobile devices
No
Service interface
Yes
User support accessibility
None or don’t know
Description of service interface
The visual front-end management interface provides the ability to build and manage the workflow of data contracts between source and consuming systems. Workflow tasks include analysing sentiment or topic relevancy and sequencing the analysis to a destination output location. The visual interface allows authorised users, such as Information Governance leads, to approve or reject data contracts. Once a contract is approved, it is converted into the infrastructure resources needed to support the data contract, including any roles, service accounts, pods, services, configurations, secrets, Kafka topics, load balancers, DNS and subdomains as required.
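For illustration only, the sketch below shows one way such a data contract might be represented and approved programmatically; the field names, step names and roles are assumptions for this example and do not describe CHECK's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a data contract as it might be captured by the
# management interface before approval. Field and step names are
# illustrative only, not CHECK's actual schema.

@dataclass
class WorkflowStep:
    name: str          # e.g. "sentiment_analysis" or "topic_relevancy"
    destination: str   # where the step's output is routed

@dataclass
class DataContract:
    source_system: str
    consuming_system: str
    steps: List[WorkflowStep] = field(default_factory=list)
    status: str = "draft"  # draft -> submitted -> approved/rejected

    def submit(self) -> None:
        self.status = "submitted"

    def approve(self, approver_role: str) -> None:
        # Only an authorised role (e.g. an Information Governance lead)
        # may approve; in CHECK, approval would trigger provisioning of
        # the underlying Kubernetes and Kafka resources.
        if approver_role != "information_governance_lead":
            raise PermissionError("Approval requires an IG lead role")
        self.status = "approved"

contract = DataContract(
    source_system="conversation-service",
    consuming_system="analytics-store",
    steps=[WorkflowStep("sentiment_analysis", "scores-topic")],
)
contract.submit()
contract.approve("information_governance_lead")
```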
Accessibility standards
None or don’t know
Description of accessibility
The service interface is a technical management console allowing users to set up and manage data contracts and the workflow that occurs as data transits CHECK. The interface is a ReactJS implementation using Material UI and allows users to drag and drop components onto the canvas and sequence them by drawing lines between components.
Selecting components displays the properties associated with that component. Once a contract has been designed it is submitted to an approver role to be approved. The design follows the principles of being clear, robust and specific.
Accessibility testing
Testing has been conducted manually using tools such as Selenium across a number of different browser types.
Accessibility features such as "dark mode" are enabled, and the interface is compatible with browser and Windows magnifiers.
API
Yes
What users can and can't do using the API
Users can call the API to process data and receive scores against specific behavioural markers for the unit of analysis of interest (interaction, person or group), together with supporting explanation information.

Users can create a variety of dynamic data flows and API gateways. These APIs allow customers to send data to, or receive data from, one or many other systems as defined in the integration contracts.
CHECK supports the following interfaces and standards:
- SOAP
- REST
- RPC
Data formats supported include TEXT, CSV, JSON, XML and HL7.

APIs are available to stream audit data to a trusted audit aggregator.
Users cannot use the APIs to run services which are not in the component registry.
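For illustration only, a scoring call might look like the sketch below. The host, path, payload fields and response shape are assumptions; the actual endpoints and schemas are described in the Open API (Swagger) documentation.

```python
import requests

# Hypothetical REST call to a CHECK scoring endpoint. The host, path,
# payload fields and response shape are assumptions for illustration;
# consult the Open API (Swagger) documentation for the real interface.
BASE_URL = "https://check.example.org/api/v1"   # placeholder host
TOKEN = "..."                                    # obtained from the OAuth2 provider

payload = {
    "unit_of_analysis": "interaction",           # interaction | person | group
    "text": "I've been finding it easier to keep up my walking this week.",
}

response = requests.post(
    f"{BASE_URL}/score",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
result = response.json()

# Illustrative response shape: behavioural-marker scores plus explanations.
for marker in result.get("markers", []):
    print(marker.get("name"), marker.get("score"), marker.get("explanation"))
```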
API documentation
Yes
API documentation formats
  • Open API (also known as Swagger)
  • Other
API sandbox or test environment
Yes
Customisation available
Yes
Description of customisation
Customisation can take place in a number of areas, including i) workflow sequencing, ii) data enhancement/enrichment and iii) model development - we can integrate your key metrics into the model calculations to provide enhanced predictions.

CHECK is built on an open-source architecture, so components such as new validation routines, data enrichment sources and transformations can be created without proprietary knowledge. The architecture follows a modular microservices approach orchestrated by Kubernetes, so components remain small and do not require complicated call and service setup, as this is handled by Kubernetes. Data routes can be customised further by defining aspects such as cost centres for resource usage, and thresholds for scaling resources up or down in response to changes in demand.

Scaling

Independence of resources
Performance and scaling are central to the core design of CHECK, which is centred on Kubernetes pods and microservices to ensure that workflow and scoring components are separated and operate independently of one another. This means that routes consuming high volumes of activity scale in line with demand and do not affect other routes with moderate demand.

Once the infrastructure is provisioned, it is monitored by a separate controller service that manages demand, including scaling back when demand is lower.
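As a hedged sketch of this pattern (assuming the standard Kubernetes Python client and a hypothetical per-route deployment called route-a-scorer), per-route autoscaling could be expressed as a HorizontalPodAutoscaler:

```python
from kubernetes import client, config

# Illustrative sketch: attach a HorizontalPodAutoscaler to a hypothetical
# per-route scoring deployment so it scales with demand independently of
# other routes. Names, namespace and thresholds are assumptions, not
# CHECK's actual configuration.
config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="route-a-scorer-hpa", namespace="check"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="route-a-scorer"
        ),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="check", body=hpa
)
```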

Analytics

Service usage metrics
Yes
Metrics types
Metrics are provided for:
i) Each data route, including cost per route, storage used, compute time, number of transactions and transaction size
ii) Support provided, including response and resolution against the SLA, plus user requests and tickets compared per data route
iii) The ecosystem overall, including cost, compute time, uptime, transaction numbers and storage
iv) Model accuracy (precision, recall, F1 score)
Reporting types
  • API access
  • Real-time dashboards
  • Regular reports
  • Reports on request

Resellers

Supplier type
Not a reseller

Staff security

Staff security clearance
Conforms to BS7858:2019
Government security clearance
Up to Developed Vetting (DV)

Asset protection

Knowledge of data storage and processing locations
Yes
Data storage and processing locations
United Kingdom
User control over data storage and processing locations
Yes
Datacentre security standards
Managed by a third party
Penetration testing frequency
At least once a year
Penetration testing approach
Another external penetration testing organisation
Protecting data at rest
  • Physical access control, complying with CSA CCM v3.0
  • Physical access control, complying with SSAE-16 / ISAE 3402
Data sanitisation process
No
Equipment disposal approach
A third-party destruction service

Data importing and exporting

Data export approach
A rich set of APIs enables data to be accessed as it transits CHECK and is validated, enriched and transformed via the workflows described in the data contracts. As a result, the data can be exported in any number of formats appropriate to the destination systems and their capabilities, including topic-based queues, databases, or messages conforming to open standards such as FHIR, HL7, IHE or openEHR.
Data contracts are set up to enable data to be passed in a managed way.
HD Labs will work with the customer to determine the best export format for their destination systems.
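As an illustration under stated assumptions (the kafka-python client, a hypothetical broker address, topic name and JSON message format), a destination system might consume an exported topic-based queue like this:

```python
import json
from kafka import KafkaConsumer

# Hypothetical sketch of a destination system consuming exported records
# from a topic-based queue. The broker address, topic name and message
# format are assumptions for illustration only.
consumer = KafkaConsumer(
    "check-export",                               # hypothetical export topic
    bootstrap_servers=["broker.example.org:9092"],
    group_id="destination-system",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Route the record into the destination system, e.g. a FHIR store or
    # an open-source database, according to the agreed export format.
    print(record)
```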
Data export formats
  • CSV
  • ODF
  • Other
Other data export formats
  • Open Source Database
  • FHIR
  • TXT
  • JSON
  • XML
  • HL7
  • CSV
Data import formats
  • CSV
  • ODF
  • Other
Other data import formats
  • FHIR
  • XML
  • JSON
  • TXT
  • XLSX
  • Doc

Data-in-transit protection

Data protection between buyer and supplier networks
  • Private network or public sector network
  • TLS (version 1.2 or above)
  • IPsec or TLS VPN gateway
Data protection within supplier network
  • TLS (version 1.2 or above)
  • IPsec or TLS VPN gateway

Availability and resilience

Guaranteed availability
Level of availability varies depending on the specific project and requirements.
Approach to resilience
The underlying cloud infrastructure is highly available across multiple availability zones. Each zone is designed to eliminate single points of failure (such as power, network and hardware). As an example, by utilising the three availability zones in the AWS London region we are able to protect against a disaster at any of the data centres.
Within AWS implementations, the Amazon Elastic Kubernetes Service (EKS) cluster is distributed across the three availability zones and the workload is spread across the nodes using logic inside the Kubernetes service. In the event of an outage, the workload will be redistributed across the remaining nodes, and the autoscale configuration allows the EKS clusters on the remaining sites to scale out to cope with the reduction in service. The Amazon Managed Streaming for Apache Kafka (MSK) cluster is distributed across the three availability zones and the queue partitions are automatically replicated to the other zones to ensure no data loss. The Amazon Elastic MapReduce (EMR) service is distributed across the three availability zones and the workload is spread across the nodes using MapReduce functionality.

In addition, all infrastructure and workflow processes are defined as code, so they are designed to be repeatable and consistent.
Outage reporting
In addition to the monitoring tools in place, CHECK can email pre-defined customer and support groups.

Where API integration is available, we can also automatically raise tickets on a variety of support desks.

Identity and authentication

User authentication needed
Yes
User authentication
  • 2-factor authentication
  • Public key authentication (including by TLS client certificate)
  • Identity federation with existing provider (for example Google Apps)
  • Limited access network (for example PSN)
  • Dedicated link (for example VPN)
  • Username or password
  • Other
Other user authentication
An authentication provider provides functionality for the system to log in to other applications and to authenticate other applications when they are making calls to CHECK.
CHECK defines scopes for permissions; these scopes are mapped by the authentication provider to state what access an individual or application has when they are logged into the platform.
The authentication provider also provides policy-based access controls for any users or applications accessing the system. These roles limit access to only what is needed, based on a variety of attributes that are assessed during the login process.
Access restrictions in management interfaces and support channels
HD Labs solutions are cloud-agnostic. However, as an example, 2-factor authentication is required on the AWS Management Console to manage the AWS account. For infrastructure changes, 2-factor authentication across a VPN connection is required; the VPN is established using public-key authentication. Access to the management interface that allows users to set up and approve or reject electronic data contracts is restricted via role-based access control. Similarly, the service support channel, via a dedicated web portal, requires a username and password in order to log in, raise a support ticket and view the progress of an existing ticket.
Access restriction testing frequency
At least every 6 months
Management access authentication
  • 2-factor authentication
  • Public key authentication (including by TLS client certificate)
  • Identity federation with existing provider (for example Google Apps)
  • Limited access network (for example PSN)
  • Dedicated link (for example VPN)
  • Username or password
  • Other
Description of management access authentication
We provide OAuth2 services via Keycloak. This can plug into a variety of other OAuth2 services such as CIS2 and NHS Futures, as well as providing authentication through other methods such as SAML. This can be extended to mapping our roles to roles defined on the source system, which ensures that we can integrate with many of our customers' SSO offerings and allow them to manage user access through their own role-based access controls.
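For illustration only (hypothetical host, realm, client ID and scope; the token path can differ between Keycloak versions), obtaining a token via the OAuth2 client-credentials grant might look like:

```python
import requests

# Hedged sketch of an OAuth2 client-credentials exchange against a Keycloak
# realm. Host, realm, client ID/secret and scope are hypothetical; older
# Keycloak versions prefix the path with /auth.
TOKEN_URL = "https://idp.example.org/realms/check/protocol/openid-connect/token"

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "check-client",
        "client_secret": "<secret>",
        "scope": "check.read",   # scopes map to permissions within CHECK
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The bearer token is then sent on API calls; the mapped scopes determine
# what the caller is permitted to do.
headers = {"Authorization": f"Bearer {access_token}"}
```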

Audit information for users

Access to user activity audit information
Users have access to real-time audit information
How long user audit data is stored for
User-defined
Access to supplier activity audit information
Users have access to real-time audit information
How long supplier audit data is stored for
User-defined
How long system logs are stored for
User-defined

Standards and certifications

ISO/IEC 27001 certification
No
ISO 28000:2007 certification
No
CSA STAR certification
No
PCI certification
No
Cyber essentials
Yes
Cyber essentials plus
Yes
Other security certifications
No

Security governance

Named board-level person responsible for service security
Yes
Security governance certified
No
Security governance approach
All systems and components utilise end-to-end encryption, and all data is encrypted at rest. Audit records are signed in a chain to ensure a tamper-evident solution, and audits can be exported to the customer's own audit analysis and reporting solution. Each integration contract includes a description of what that integration is doing and what it is for, and provides metadata allowing the contract to be linked to data sharing agreements. Each contract can be reviewed and approved by an IG officer using the management portal before processing is allowed.
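The tamper-evident audit chain can be illustrated with a minimal sketch; the hashing scheme and record fields below are assumptions for illustration, not the production implementation:

```python
import hashlib
import json

# Minimal illustration of a tamper-evident audit chain: each record embeds a
# hash over its content and the previous record's hash, so any alteration
# breaks the chain. Field names and the hashing scheme are illustrative only.
def append_record(chain: list, event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

audit_chain: list = []
append_record(audit_chain, {"actor": "ig_lead", "action": "approve_contract"})
append_record(audit_chain, {"actor": "system", "action": "provision_route"})
assert verify(audit_chain)
```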
Information security policies and processes
We are currently working to attain ISO 27001 accreditation. Risk registers exist for each project and are updated regularly. Change processes exist to ensure that customers are made aware of any changes. Components always use protocols and services which ensure that data is encrypted both in transit and at rest.

Developments are thoroughly tested prior to release, with test plans updated to reflect new and emerging threats and vulnerabilities. This is built into the deployment pipeline as releases move from Development environments to Quality Assurance environments before progressing to User Acceptance and Production environments.

HD Labs also have the following certifications:

- Certified Cyber Essentials Plus, registration IASME-CEP-007590.

- Registered with the Information Commissioner's Office (ICO), registration ZA794790.

- Registered with the NHS Data Security and Protection Toolkit (DSPT), organisation code 8KL37.

HD Labs also have executive board nominations for Caldicott Guardian, Senior Information Risk Officer (SIRO), Information Governance Lead and Data Protection Officer (DPO).

Operational security

Configuration and change management standard
Supplier-defined controls
Configuration and change management approach
HD Labs will work with the organisation in advance of any changes and jointly assess:

- the change itself.
- the risk of not implementing the change.
- the back out plans of any change to ensure that the service or system can be rolled back (restored) to its pre-change status.

HD Labs utilise an agile product development process, and upcoming changes are visible to the wider service teams well in advance of release schedules.

All configuration and changes are source controlled, including Infrastructure as Code (IaC).
Vulnerability management type
Supplier-defined controls
Vulnerability management approach
We utilise automated documentation tools to assist in documenting the software used by our solution, including container images and prerequisites, to prevent differences between environments.

The deployment pipeline includes automated test scripts to check for vulnerabilities. The CTO and Product Delivery roles also subscribe to sources including the National Cyber Security Centre (NCSC), OWASP and NHS Digital.

Vulnerabilities are raised on the project and customer risk registers. We investigate the severity of each vulnerability's impact on our software, update the risk register with advice and mitigation plans, and ensure scheduled remediation.
Protective monitoring type
Supplier-defined controls
Protective monitoring approach
The infrastructure is continuously monitored using a number of tools, focusing specifically on cluster and queue performance, the associated actions when communicating between source and destination systems, and workflow actions.
Infrastructure capacity is also monitored so that, as the organisation's data flows expand and the volume of referral traffic increases, appropriate capacity is maintained.
Incident management type
Supplier-defined controls
Incident management approach
Technical Service Agents are responsible for logging each initial contact as a “case” in the Jira Service Management application; they also provide an escalation route to other team members in Development and Infrastructure.

Technical Service Agents are responsible for managing the “case” through to resolution with the customer and, where necessary, will call on expertise in other teams to help with resolution, as well as escalating cases to managers as appropriate. Where appropriate, we will refer the call to the relevant support desks, such as those of supplying or consuming data systems.

Secure development

Approach to secure software development best practice
Conforms to a recognised standard, but self-assessed

Public sector networks

Connection to public sector networks
Yes
Connected networks
  • NHS Network (N3)
  • Health and Social Care Network (HSCN)

Social Value

Fighting climate change

Fighting climate change

The CHECK suite is designed to run in the cloud, including on public cloud infrastructure, which lowers energy consumption and costs through the greater economies of scale of a shared data centre compared with a private data centre.
Additionally, through the orchestration of a microservices model running on open-source components, infrastructure can scale down when not required and thus does not consume energy when idle with no data throughput.
At HD Labs we have sought to minimise memory and compute usage by choosing technologies that have proven to be lower in memory consumption when in use, and thus have lower energy usage.
Covid-19 recovery

Covid-19 recovery

CHECK provides tools to measure and enhance staff and patient wellbeing by identifying their readiness to engage in interventions and offering the ability to customise behavioural prompts and nudges that direct them to appropriate support, whether that be peer support, social prescribing, employment assistance or a health professional. The CHECK suite ensures citizens who are waiting for elective care can be triaged and prioritised in a way that maximises their health outcomes and makes best use of NHS resources and funds.
Tackling economic inequality

Tackling economic inequality

The CHECK suite can provide analytics to underpin Population Health Management, improving case-finding of vulnerable individuals and tailoring interventions which will help them recover their ability to contribute economically and socially.
Equal opportunity

Equal opportunity

The CHECK models are assessed for inherent bias in sourcing, processing and interpretation of data. These fairness audits help ensure that they benefit all, not just the 'average' person.
Wellbeing

Wellbeing

CHECK provides tools to measure and enhance staff and patient wellbeing by identifying their readiness to engage in interventions and offering the ability to customise behavioural prompts and nudges that direct them to appropriate support, whether that be peer support, social prescribing, employment assistance or a health professional. The HDL way of working is grounded in co-production, so that any application is designed and delivered with stakeholders to build credibility and ownership, and transparently, to engender trust.

Pricing

Price
£2,000.00 to £12,500.00 an instance a month
Discount for educational organisations
Yes
Free trial available
No
