This just involves installing the TeselaGen Python Client package.
# This installs the 'teselagen' client python package.
#!pip3 install teselagen==0.3.2
This notebook shows how to use TeselaGen's Python TEST Client to connect to TeselaGen's TEST Module through its REST API.
The data used throughout this Notebook is publicly available at the ABF Multiomics Paper GitHub repo.
import requests
import platform
import io
import pandas as pd
from pprint import pprint
import teselagen
from teselagen.api import TeselaGenClient
print(f"python version : {platform.python_version()}")
print(f"pandas version : {pd.__version__}")
print(f"teselagen version : {teselagen.__version__}")
# Connect to your teselagen instance by passing it as the 'host_url' argument of TeselaGenClient(host_url=host_url)
client = TeselaGenClient(host_url="https://platform.teselagen.com")
# The following command will prompt you to type your username (email) and password
client.login()
Select a Laboratory within which we'll be working. Creating a new Laboratory is done through the UI and requires an admin account.
## Fetch My Laboratories
labs = client.get_laboratories()
display(labs)
lab_id = labs[0]['id']
## Select a Laboratory
client.select_laboratory(lab_name="Test Lab")
#client.unselect_laboratory()
1) Create an experiment; this will be the scope of our files and assay measurements.
2) Create TEST metadata according to the multiomics files. These are used to map the different data file headers. The metadata records we are going to create are of type/class:
a. Descriptor Type
b. Measurement Target
c. Assay Subject Class
d. Reference Dimension
e. Unit
Experiments are part of the TEST organizational hierarchy. They belong to Laboratories and can be used to store many Assay measurements for different Assay Subjects. For the multiomics data, we are going to create an Experiment where we're going to store all of the multiomics files, and data corresponding to the Wild Type and other Strain Subjects.
## This will create a new Experiment. The output will give us the Experiment ID that we'll be using later.
experiment_name="Multiomics data for WT Strain"
experiment = client.test.create_experiment(experiment_name=experiment_name)
print(experiment)
wt_experiment_id = experiment['id']
## This will create a new Experiment. The output will give us the Experiment ID that we'll be using later.
experiment_name="Multiomics BE strains data"
experiment = client.test.create_experiment(experiment_name=experiment_name)
print(experiment)
be_experiment_id = experiment['id']
Here we are going to create all the necessary metadata records needed according to the Multiomics Files Headers. In TEST, metadata records are strictly related to the mapping of tabular data. There are different classes or types of metadata (refer to Metadata Documentation).
One way of understanding TEST metadata records is that these are used to map (i.e., give meaning) to columns in tabular data, much like tabular headers do but in a more structured manner.
The following Notebook cells show how to create these metadata records. For each record created, an ID will be returned. These IDs will be particularly important when creating the different mappers (array of structured headers) used to import the tabular data in the multiomics files.
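As a toy illustration of how a metadata record gives structure to a tabular column, a structured header pairs a column name with a metadata class and a record ID. Both the column name and the ID below are invented for illustration:

```python
# A hypothetical structured header: it maps a tabular column named
# "Growth Temperature" to a descriptorType metadata record with ID "42".
# (Both the column name and the ID are invented for illustration.)
structured_header = {
    "name": "Growth Temperature",  # the column header as it appears in the file
    "class": "descriptorType",     # the metadata class/type this column maps to
    "subClassId": "42",            # the ID of the specific metadata record
}
print(structured_header["class"])
```

We will build real structured headers of exactly this shape when constructing the mappers below.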
Descriptor types are one of TEST's metadata classes/types, used specifically to identify data columns corresponding to assay subject descriptors, features or characteristics.
For the Multiomics paper, the "experiment description files" describe each Strain with a set of characteristics, and these correspond to TEST descriptor types.
experiment_description_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_experiment_description_file_WT.csv"
experiment_description_df = pd.read_csv(experiment_description_fileurl)
experiment_description_df.head()
# Here we are going to create the necessary Descriptor Types
# that will be used to map the different Strains' characteristics described in
# the experiment description files.
# The first column is omitted, since it's the 'Line Name', which is not a descriptor but the Strain itself.
descriptorTypeNames = experiment_description_df.columns.values.tolist()[1:]
# Here we construct the 'descriptorTypes' metadata records.
# Also, we strip any leading or trailing spaces in the file header names.
descriptorTypes = [{"name": descriptorTypeName.strip()} for descriptorTypeName in descriptorTypeNames]
result = client.test.create_metadata(metadataType="descriptorType", metadataRecord=descriptorTypes)
# After creating the descriptor types, we are going to construct a mapper dictionary: 'descriptorTypeNamesToIds'
# that we will use to know the metadata descriptorType record IDs from their names.
descriptorTypeNamesToIds = {x['name']: x['id'] for x in result}
display(descriptorTypeNamesToIds)
Measurement targets are another of TEST's metadata classes/types. These are used to identify different types of measurements in assay results.
For the multiomics paper, we need to create the optical density measurement target metadata record before importing optical density data.
# To create a measurement target, we simply construct a JSON with the 'name' key as below.
measurementTarget = { "name": "Optical Density" }
result = client.test.create_metadata(metadataType="measurementTarget", metadataRecord=measurementTarget)
# Again, here we construct an auxiliary mapper dictionary: 'measurementTargetNametoIds',
# that we will use to know the metadata measurementTarget record ID from its name.
measurementTargetNametoIds = {result[0]['name']: result[0]['id']}
measurementTargetNametoIds
Assay Subject Classes are another TEST metadata class/type. In TEST, each Assay Subject (or simply subject) is mapped to a subject class or category.
In this particular case, the Subjects are the Strains, so we're going to classify them under the "Strain" assaySubjectClass that we'll create.
# To create an assay subject class, we simply construct a JSON with the 'name' key as below.
assaySubjectClass = { "name": "Strain" }
result = client.test.create_metadata(metadataType="assaySubjectClass", metadataRecord=assaySubjectClass)
# Again, here we construct an auxiliary mapper dictionary: 'assaySubjectClassNameToId',
# that we will use to know the metadata assaySubjectClass record ID from its name.
assaySubjectClassNameToId = {result[0]['name']: result[0]['id']}
display(assaySubjectClassNameToId)
Reference dimensions are yet another TEST metadata class/type. In TEST, when importing assay subject measurements, these may be associated with what is known as a Reference Dimension. Simply put, a reference dimension is the independent variable of a measurement; in other words, it represents the X-axis dimension in a 2D plot.
In the multiomics paper, the only reference dimension that is used is Time.
Usually, a reference dimension is measured in units of a particular unit dimension. Here, time is measured in hours (unit dimensions are also a TEST metadata class).
# Here we list all the currently available reference dimensions in TEST,
# and see there's already a reference dimension called 'Elapsed Time', which we'll use later on.
pprint(client.test.get_metadata(metadataType="referenceDimension"))
# We are going to store this 'Elapsed Time' ID (seen in the listing above) in a variable to use later.
referenceDimensionNameToId = {'Elapsed Time': '1'}
referenceDimensionNameToId
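Rather than hard-coding the ID, the name-to-ID mapping could be built from the `get_metadata` response, the same way we did for descriptor types. A sketch, where `metadata` stands in for the list returned by `client.test.get_metadata(metadataType="referenceDimension")`:

```python
# Stand-in for client.test.get_metadata(metadataType="referenceDimension");
# the real call returns a list of records with 'id' and 'name' keys.
metadata = [{"id": "1", "name": "Elapsed Time"}]

# Build the name -> ID mapper from the listing, instead of hard-coding '1'.
referenceDimensionNameToId = {record["name"]: record["id"] for record in metadata}
print(referenceDimensionNameToId["Elapsed Time"])
```

This keeps the notebook robust if the record IDs differ between TEST instances.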
Units are yet another TEST metadata class/type. These are used to map referenceDimension and measurementTarget values to a particular unit. Currently, this is mandatory for every such record.
Within the Units scope, there are actually three TEST metadata classes/types: unit dimension, unit scale and unit.
Unit Dimensions: these correspond to metadata objects representing physical dimensions (e.g., Time, Volume, Concentration, etc.)
Unit Scales: these correspond to metadata objects used to group several units together into a scale or group. This is used to convert from one unit to another. However, a 'dummy' scale can be constructed in case this functionality is not needed.
Units: these are the individual unit records themselves (e.g., mM, hours), each belonging to a unit scale.
Finally, to fully understand TEST unit metadata classes, note that each unit is part of a unit scale, and each unit scale has a unit dimension: unit --> unit scale --> unit dimension.
In the multiomics paper, several units are used for the different data measurements. Here, we are going to create them in order to proceed with the importing process. We are also going to create a dummy dimensionless unit scale, unit dimension and unit for the Optical Density measurements, which use no units (again, this may seem unnecessary, but currently all measurements need to be associated with a unit).
# Here we list all the currently available units in TEST
pprint(client.test.get_metadata(metadataType="unit"))
# Here we list all the currently available unit scales in TEST
unitScales = client.test.get_metadata(metadataType="unitScale")
pprint(unitScales)
# First we are going to create this 'dummy' dimensionless unitDimension metadata record.
result=client.test.create_metadata(metadataType="unitDimension", metadataRecord={"name":"dimensionless"})
unitDimensionId = result[0]['id']
# Then we are going to create this 'dummy' dimensionless unitScale metadata record.
result=client.test.create_metadata(metadataType="unitScale", metadataRecord={"name":"dimensionless", "unitDimensionId": unitDimensionId})
unitScales = client.test.get_metadata(metadataType="unitScale")
# Here we just construct an auxiliary mapper dictionary that we will use
# to know the metadata unitScale record ID from its name.
unitScalesNameToId = {unitScale['name']: unitScale['id'] for unitScale in unitScales}
pprint(unitScalesNameToId)
# The next units are used by the metabolomics, transcriptomics and proteomics datasets.
# These three units are of type Concentration, so we'll add them to the 'Metric Concentration' unit scale.
# The fourth and last unit, called 'n/a', will be used to import the Optical Density data.
client.test.create_metadata(metadataType="unit", metadataRecord=[
{"name":"mM", "unitScaleId": unitScalesNameToId['Metric Concentration']},
{"name":"FPKM", "unitScaleId": unitScalesNameToId['Metric Concentration']},
{"name":"proteins/cell", "unitScaleId": unitScalesNameToId['Metric Concentration']},
# we create here the 'n/a' unit with dimensionless (or dummy) scale.
{"name":"n/a", "unitScaleId": unitScalesNameToId['dimensionless']},
])
Now that the Laboratory has been prepared, we are ready to begin the data import process.
1) Import the strains' (i.e., subjects') experiment description data stored in the "EDD_experiment_description_file_WT.csv" and "EDD_experiment_description_file_BE_designs.csv" files.
2) Import the WT strain (subject) Optical Density data stored in the "EDD_OD_WT.csv" file.
3) Import the WT strain external metabolites data stored in the "EDD_external_metabolites_WT.csv" file.
4) Import the WT strain transcriptomics data stored in the "EDD_transcriptomics_WTSM.csv" file.
5) Import the strain proteomics data stored in the "EDD_proteomics_WTSM.csv" file.
6) Import the strain metabolomics data stored in the "EDD_metabolomics_WTSM.csv" file.
7) Import the strain Isoprenol Production data stored in the "EDD_isoprenol_production.csv" file.
In order to import data from a tabular file into the TEST module, we need to create a mapper JSON.
Here we are going to use the descriptorType IDs obtained above to construct this mapper.
In the multiomics paper, there are two experiment description files: one for the Wild Type and another for the rest of the BE Strain designs.
We are going to take advantage of the fact that both files share a very similar structure and construct one single mapper for both of them.
# Here we read and transform the Experiment Description File for the Wild Type Strain.
wt_experiment_description_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_experiment_description_file_WT.csv"
wt_experiment_description_df = pd.read_csv(wt_experiment_description_fileurl)
wt_experiment_description_filepath = "./TEST_experiment_description_file_WT.csv"
wt_experiment_description_df.to_csv(wt_experiment_description_filepath, index=False)
wt_experiment_description_df.head()
# Here we read and transform the Experiment Description File for the BE Strains designs.
be_experiment_description_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_experiment_description_file_BE_designs.csv"
be_experiment_description_df = pd.read_csv(be_experiment_description_fileurl)
# We reorder some columns so it matches the format of the Wild Type Experiment Description file.
be_experiment_description_df.insert(0, "Line Description", be_experiment_description_df.pop(" Line Description"))
be_experiment_description_df.insert(0, "Line Name", be_experiment_description_df.pop(" Line Name"))
be_experiment_description_filepath = "./TEST_experiment_description_file_BE_designs.csv"
be_experiment_description_df.to_csv(be_experiment_description_filepath, index=False)
be_experiment_description_df.head()
# This is the mapper JSON that we are going to construct so that it maps the file columns accordingly.
# The mapper JSON is an array of objects. These objects are "structured" header JSON objects.
# Each structured header includes the column's 'name', plus two other properties: 'class' and 'subClassId'.
# The 'class' property indicates the column's metadata class/type, while 'subClassId'
# indicates the metadata record ID of that class.
experiment_description_mapper = list()
for column_name in experiment_description_df.columns.values.tolist():
if (column_name == "Line Name"):
structured_header = {
"name": column_name.strip(),
"class": "assaySubjectClass",
"subClassId": assaySubjectClassNameToId['Strain']
}
else:
structured_header = {
"name": column_name.strip(),
"class": "descriptorType",
"subClassId": descriptorTypeNamesToIds[column_name.strip()]
}
experiment_description_mapper.append(structured_header)
# We now have our mapper JSON that describes/maps each column in the file.
pprint(experiment_description_mapper, indent=2)
# Now that we have the Mapper JSON constructed we can go ahead and import our data.
response = client.test.import_assay_subject_descriptors(
filepath=wt_experiment_description_filepath,
mapper=experiment_description_mapper,
)
# The response will show the import status and id
response
# Check status again
result = client.test.get_assay_subjects_descriptor_import_status(importId=response['importId'])
result
# Now that we have the Mapper JSON constructed we can go ahead and import our data.
response = client.test.import_assay_subject_descriptors(
filepath=be_experiment_description_filepath,
mapper=experiment_description_mapper
)
# The response will show the import status and id
pprint(response)
result = client.test.get_assay_subjects_descriptor_import_status(importId=response['importId'])
result
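Imports are processed asynchronously, so the status check may need to be repeated until the import completes. A small polling helper could be sketched as below; `fetch_status` stands in for `client.test.get_assay_subjects_descriptor_import_status`, and the `'FINISHED'`/`'FAILED'` status values are assumptions to adjust to the real API:

```python
import time

def wait_for_import(fetch_status, import_id, poll_seconds=2, timeout=60):
    """Poll fetch_status(importId=...) until it reports a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status(importId=import_id)
        # 'FINISHED'/'FAILED' are assumed status values; adjust to the real API.
        if result.get("status") in ("FINISHED", "FAILED"):
            return result
        time.sleep(poll_seconds)
    raise TimeoutError(f"Import {import_id} did not finish within {timeout}s")

# Example with a fake status function that finishes on the second call.
calls = {"n": 0}
def fake_status(importId):
    calls["n"] += 1
    return {"status": "FINISHED" if calls["n"] > 1 else "RUNNING"}

print(wait_for_import(fake_status, "abc123", poll_seconds=0)["status"])
```

In the notebook, you would pass the real status function and the `importId` from the response above.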
Just as was done for the experiment description data, TEST needs metadata records to create the structured header objects for the file's mapper.
wt_od_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_OD_WT.csv"
wt_od_df = pd.read_csv(wt_od_fileurl)
# List the currently available units in TEST (for reference).
client.test.get_metadata(metadataType="unit")
# Adds a "time units" column for the Time reference dimension.
wt_od_df["time units"] = "hrs"
# Updates the 'Units' column to have the dummy 'n/a' unit created above.
wt_od_df["Units"] = "n/a"
# Drops the 'Measurement Type' column as it provides no useful information.
wt_od_df.drop(["Measurement Type"], axis=1, inplace=True)
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_od_filepath = "./TEST_OD_WT.csv"
wt_od_df.to_csv(new_od_filepath, index=False)
wt_od_df.head()
# Now we need to construct the file's structured headers for its mapper JSON object.
wt_od_mapper = [
{
"name": "Line Name",
"class": "assaySubjectClass",
"subClass": assaySubjectClassNameToId["Strain"]
},
{
"name": "Time",
"class": "referenceDimension",
# ID of the referenceDimension metadata record.
"subClass": referenceDimensionNameToId['Elapsed Time']
},
{
"name": "Value",
"class": "measurementTarget",
# ID of the measurementTarget metadata record.
"subClass": measurementTargetNametoIds["Optical Density"]
},
{
"name": "Units",
"class": "unit",
# ID of the measurementTarget metadata record.
# This is in order to assign this "Unit" column to the Value column measurements.
"subClass": measurementTargetNametoIds["Optical Density"]
},
{
"name": "time units",
"class": "d-unit",
# ID of the referenceDimension metadata record.
# This is in order to assign this "Unit" column to the Time column measurements.
"subClass": referenceDimensionNameToId['Elapsed Time']
}
]
pprint(wt_od_mapper, indent=2)
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Wild Type Optical Density"
response = client.test.import_assay_results(
filepath=new_od_filepath,
assay_name=assay_name,
experiment_id=wt_experiment_id,
mapper=wt_od_mapper,
)
print(response)
# We see that the function returns a 'success' boolean status and the number of results inserted.
# The number of results corresponds to the 10 optical density measurements performed on the Wild Type Strain.
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
Let's stop here for a second. The next four files are the Wild Type's multiomics data. These four files have an important characteristic in common: they all share the same tabular format.
This is useful because it allows us to use the same mapper to import all of them. So let's first create such a mapper object; then we'll see how to use it for the four upcoming import processes!
# We need to construct the multiomic files' structured headers for the mapper JSON object.
# Here, since the measurement targets are going to be created from the files' "Measurement Type" column values,
# we do not specify a subClassId in the structured header of class=measurementTarget.
wt_multiomics_mapper = [
# This first element of the array corresponds to the structured header of the files' "Line Name" column.
# All four multiomic files have this column, and it corresponds to the assay subject column of class "Strain".
{
"name": "Line Name",
"class": "assaySubjectClass",
"subClass": assaySubjectClassNameToId["Strain"]
},
# All four multiomic files have a "Measurement Type" column, which contains the measurement target values for
# the 'measurementTarget' metadata class.
{
"name": "Measurement Type",
"class": "measurementTarget",
},
# All four multiomic files have a "Time" column, which represents the reference dimension class.
{
"name": "Time",
"class": "referenceDimension",
# ID of the referenceDimension metadata record.
"subClass": referenceDimensionNameToId["Elapsed Time"]
},
# All four multiomic files have a "Value" column, which contains the measurement values for each
# measurementTarget metadata record.
{
"name": "Value",
"class": "measurementValue",
},
# All four multiomic files have a "Units" column, which contains the unit for the measurement values of each
# measurementTarget metadata record.
{
"name": "Units",
"class": "unit",
},
# All four multiomic files have a "time units" column, which contains the unit for the Time reference dimension.
{
"name": "time units",
"class": "d-unit",
# ID of the referenceDimension metadata record.
# This is in order to assign this "Unit" column to the Time column measurements.
"subClass": referenceDimensionNameToId["Elapsed Time"]
}
]
pprint(wt_multiomics_mapper, indent=2)
wt_ext_metabolites_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_external_metabolites_WT.csv"
wt_ext_metabolites_df = pd.read_csv(wt_ext_metabolites_fileurl)
# List the currently available units in TEST (for reference).
client.test.get_metadata(metadataType="unit")
# Adds a "time units" column for the Time reference dimension.
wt_ext_metabolites_df["time units"] = "hrs"
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_wt_ext_metabolites_filepath = "./TEST_external_metabolites_WT.csv"
wt_ext_metabolites_df.to_csv(new_wt_ext_metabolites_filepath, index=False)
wt_ext_metabolites_df.head()
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Wild Type External Metabolites"
response = client.test.import_assay_results(
filepath=new_wt_ext_metabolites_filepath,
#assay_id=assay_id,
assay_name=assay_name,
experiment_id=wt_experiment_id,
mapper=wt_multiomics_mapper,
)
# We see a response status with an import id value
print(response)
# Let's look at the results of the import process
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
wt_transcriptomics_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_transcriptomics_WTSM.csv"
wt_transcriptomics_df = pd.read_csv(wt_transcriptomics_fileurl)
# Adds a "time units" column for the Time reference dimension.
wt_transcriptomics_df["time units"] = "hrs"
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_wt_transcriptomics_filepath = "./TEST_transcriptomics_WTSM.csv"
wt_transcriptomics_df.to_csv(new_wt_transcriptomics_filepath, index=False)
wt_transcriptomics_df.head()
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Wild Type Transcriptomics"
response = client.test.import_assay_results(
filepath=new_wt_transcriptomics_filepath,
assay_name=assay_name,
experiment_id=wt_experiment_id,
mapper=wt_multiomics_mapper
)
# We see a response status with an import id value
response
# Let's look at the results of the import process
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
# Read Wild Type Proteomics Assay
wt_proteomics_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_proteomics_WTSM.csv"
wt_proteomics_df = pd.read_csv(wt_proteomics_fileurl)
# Adds a "time units" column for the Time reference dimension.
wt_proteomics_df["time units"] = "hrs"
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_wt_proteomics_filepath = "./TEST_proteomics_WTSM.csv"
wt_proteomics_df.to_csv(new_wt_proteomics_filepath, index=False)
wt_proteomics_df.head()
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Wild Type Proteomics"
response = client.test.import_assay_results(
filepath=new_wt_proteomics_filepath,
assay_name=assay_name,
experiment_id=wt_experiment_id,
mapper=wt_multiomics_mapper
)
# We see a response status with an import id value
response
# Let's look at the results of the import process
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
# Read Wild Type Metabolomics Assay
wt_metabolomics_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_metabolomics_WTSM.csv"
wt_metabolomics_df = pd.read_csv(wt_metabolomics_fileurl)
# Adds a "time units" column for the Time reference dimension.
wt_metabolomics_df["time units"] = "hrs"
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_wt_metabolomics_filepath = "./TEST_metabolomics_WTSM.csv"
wt_metabolomics_df.to_csv(new_wt_metabolomics_filepath, index=False)
wt_metabolomics_df.head()
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Wild Type Metabolomics"
response = client.test.import_assay_results(
filepath=new_wt_metabolomics_filepath,
assay_name=assay_name,
experiment_id=wt_experiment_id,
mapper=wt_multiomics_mapper
)
# We see a response status with an import id value
response
# Let's look at the results of the import process
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
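The uploads above all repeat the same pattern: read the CSV, add a "time units" column, save a new file, and import it with the shared multiomics mapper. The prepare step could be factored into a small helper; a sketch, where the actual import would still use `client.test.import_assay_results` as in the cells above:

```python
import pandas as pd

def prepare_multiomics_df(df, time_unit="hrs"):
    """Add the 'time units' column expected by the shared multiomics mapper."""
    out = df.copy()
    out["time units"] = time_unit
    return out

# Minimal example with a frame shaped like the EDD multiomics files.
df = pd.DataFrame({
    "Line Name": ["WT", "WT"],
    "Measurement Type": ["CID:12988", "CID:12988"],
    "Time": [0, 9],
    "Value": [0.0, 0.5],
    "Units": ["mM", "mM"],
})
prepared = prepare_multiomics_df(df)
print(list(prepared.columns)[-1])
```

Each prepared frame would then be saved with `to_csv(..., index=False)` and passed to the import call with its own assay name.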
# Read Isoprenol Assay Results
isoprenol_fileurl = "https://raw.githubusercontent.com/AgileBioFoundry/multiomicspaper/master/data/omg_output/edd/EDD_isoprenol_production.csv"
isoprenol_df = pd.read_csv(isoprenol_fileurl)
# Adds a "time units" column for the Time reference dimension.
isoprenol_df["time units"] = "hrs"
# Here we move the 'Time' column to the same position as in the multiomics data files seen above.
# This is not necessary if we choose to use another mapper object that matches this file, but it's easier
# to maintain the same column order and reuse the multiomics mapper constructed above.
isoprenol_df.insert(2, "Time", isoprenol_df.pop("Time"))
# Now we are ready to save this updated dataframe into a new CSV file and upload it into TEST experiment scope.
new_isoprenol_filepath = "./TEST_isoprenol_production.csv"
isoprenol_df.to_csv(new_isoprenol_filepath, index=False)
isoprenol_df.head()
# Now we choose to put the assay results into an assay identified by the assay_name variable.
assay_name = "Isoprenol Production"
response = client.test.import_assay_results(
filepath=new_isoprenol_filepath,
assay_name=assay_name,
experiment_id=be_experiment_id,
mapper=wt_multiomics_mapper
)
# We see a response status with an import id value
response
# Let's look at the results of the import process
result = client.test.get_assay_results_import_status(importId=response['importId'])
result
Here we are going to export the Isoprenol Production Assay, with all of the 96 Strains (95 + WT). We are going to demonstrate two ways of exporting the data: without and with the assay subjects' descriptor data, toggled by the 'with_subject_data' flag.
assay_name = "Isoprenol Production"
assay = client.test.get_assays()
assay = list(filter(lambda x: x['name'] == assay_name, assay))
assay_id=assay[0]['id']
print(assay)
# This will return the 'CID:12988' (Isoprenol) concentration in millimolar (mM) for every Strain.
# NOTE: Strains are identified by their Strain ID, which is auto-generated when the strain subjects were inserted.
# If more information about the Strains is needed, set the 'with_subject_data' flag to True (as shown in the next cell).
results_wo_subject_data=client.test.get_assay_results(assay_id=assay_id, as_dataframe=True, with_subject_data=False)
results_wo_subject_data.head()
results_with_subject_data=client.test.get_assay_results(assay_id=assay_id, as_dataframe=True, with_subject_data=True)
results_with_subject_data.head()
# Download the Isoprenol Production File data from TEST Module.
file_name = 'TEST_isoprenol_production.csv'
# This function will return all the files uploaded in the current Laboratory.
files=client.test.get_files_info()
file = list(filter(lambda x: x['name'] == file_name, files))
file_id = file[0]['id']
# The client.test.download_file function takes a 'file_id', which can be obtained by inspecting
# the result of the client.test.get_files_info() function, and downloads the file's content.
pd.read_csv(client.test.download_file(file_id=file_id)).head()
# The client.test.get_assays() function returns all assays available in the current laboratory.
assays = client.test.get_assays()
display(assays[0:2])
# client.test.get_experiments() function returns all experiments available in the current laboratory.
experiments = client.test.get_experiments()
pd.DataFrame(experiments)