Custom Variables Guide#

AI-generated content

This page was generated by AI and has not been fully reviewed by a human. Content may be inaccurate or incomplete. If you find any issues, please create an issue on the GitHub repository.

This guide explains how to create fully custom variables in CORR-Vars, including how to write and use custom Python functions with the py argument.

Understanding Variable Architecture#

CORR-Vars supports several levels of variable customization:

  1. Pre-defined Variables: Use existing variables from the catalog

  2. Custom Aggregations: Create NativeStatic/DerivedStatic with custom expressions

  3. Native Variables: Extract new data directly from databases (source-specific)

  4. Derived Variables: Process existing variables with custom Python functions

  5. Source-Specific Variables: Define new database extractions for specific sources

Important Distinction:

  • Native Variables (CUB-HDP, reprodICU): Extract NEW data from databases using SQL queries
  • Derived Variables: Process EXISTING variables that are already loaded into the cohort

Variable Type Comparison#

| Variable Type | Data Source | Purpose | Class to Use |
|---|---|---|---|
| NativeDynamic | Database query | Time-series from DB | sources.cub_hdp.Variable (dynamic=True) |
| Complex (DB) | Database query (executed in py) | Single value or time-series (specified by dynamic) | sources.cub_hdp.Variable (dynamic=True or False) |
| NativeStatic (Aggregation) | ONE existing dynamic variable (NO DB query) | Simple aggregation (!first, !max, etc.) | sources.aggregation.NativeStatic |
| DerivedStatic | Multiple existing variables (NO DB query) | Complex single-value calculation | sources.aggregation.DerivedStatic |
| DerivedDynamic | Multiple existing variables (NO DB query) | Complex time-series calculation | sources.aggregation.DerivedDynamic |
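To make the distinction concrete, here is a minimal sketch contrasting the two approaches. The SQL filter and variable names are illustrative only, not real catalog entries:

from corr_vars.sources.cub_hdp import Variable
from corr_vars.sources.aggregation import NativeStatic

# Native: extracts NEW rows from a database table via a SQL filter
native_example = Variable(
    var_name="blood_example_marker",  # hypothetical variable
    table="it_ishmed_labor",
    where="c_katalog_leistungtext LIKE '%example marker%'",  # hypothetical filter
    value_dtype="DOUBLE",
    dynamic=True
)

# Aggregation: post-processes a variable ALREADY loaded into the cohort
derived_example = NativeStatic(
    var_name="first_example_marker",
    select="!first value",
    base_var="blood_example_marker"
)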

Derived Variables with Custom Functions (py Parameter)#

The most powerful way to create custom derived variables is by providing a Python function to the py parameter. These variables process existing data (from requires) rather than extracting new data from databases.

Function Signature#

Your custom function must follow this signature:

def my_custom_function(var, cohort):
    """
    Custom variable calculation function.

    Args:
        var: The Variable object containing metadata
        cohort: The Cohort object with access to all data

    Returns:
        polars.DataFrame: Processed data for this variable
    """
    # Your logic here
    return result_dataframe

Basic Example: DerivedDynamic (Time-Series Output)#

Let’s create a variable that calculates the shock index (heart rate / systolic blood pressure):

import polars as pl
from corr_vars import Cohort
from corr_vars.sources.aggregation import DerivedDynamic

def calculate_shock_index(var, cohort):
    """Calculate shock index from heart rate and blood pressure."""

    # Access required variables from the variable object
    hr_var = var.required_vars["heart_rate"]
    sbp_var = var.required_vars["blood_pressure_sys"]

    # Get the data (already extracted by CORR-Vars)
    hr_data = hr_var.data
    sbp_data = sbp_var.data

    # Join the datasets on common keys and time
    merged = hr_data.join(
        sbp_data,
        on=["icu_stay_id", "recordtime"],
        how="inner",
        suffix="_sbp"
    )

    # Calculate shock index
    result = merged.with_columns([
        (pl.col("value") / pl.col("value_sbp")).alias("value")
    ]).select([
        "icu_stay_id", "recordtime", "value"
    ])

    return result

# Create the DERIVED variable (processes existing data)
shock_index_var = DerivedDynamic(
    var_name="shock_index_dynamic",
    requires=["heart_rate", "blood_pressure_sys"],  # Existing variables
    py=calculate_shock_index,  # Your custom function
    py_ready_polars=True,  # Function accepts polars DataFrames
    cleaning={"value": {"low": 0.1, "high": 5.0}}  # Optional cleaning
)

# Add to cohort
cohort = Cohort(obs_level="icu_stay", load_default_vars=False)
# First add the required variables
cohort.add_variable("heart_rate")
cohort.add_variable("blood_pressure_sys")
# Then add the derived variable
cohort.add_variable(shock_index_var)

Simple Aggregations: NativeStatic#

For simple aggregations of a single dynamic variable, use NativeStatic instead of writing custom functions:

from corr_vars.sources.aggregation import NativeStatic

# Get first blood pressure measurement
first_bp = NativeStatic(
    var_name="first_blood_pressure_sys",
    select="!first value",
    base_var="blood_pressure_sys"  # Must be an existing dynamic variable
)

# Get maximum heart rate in first 24 hours
max_hr_24h = NativeStatic(
    var_name="max_heart_rate_24h",
    select="!max value",
    base_var="heart_rate",
    tmin="icu_admission",
    tmax=("icu_admission", "+24h")
)

# Get blood pressure closest to admission
admission_bp = NativeStatic(
    var_name="admission_blood_pressure",
    select="!closest(icu_admission, 0, 2h) value",
    base_var="blood_pressure_sys"
)

# Add to cohort (base variable must exist first)
cohort = Cohort(obs_level="icu_stay", load_default_vars=False)
cohort.add_variable("blood_pressure_sys")  # Add base variable first
cohort.add_variable("heart_rate")

# Then add aggregated variables
cohort.add_variable(first_bp)
cohort.add_variable(max_hr_24h)
cohort.add_variable(admission_bp)

Available NativeStatic Aggregation Functions:

  • !first [columns] - First recorded value
  • !last [columns] - Last recorded value
  • !max [column] - Maximum value
  • !min [column] - Minimum value
  • !mean [column] - Mean value
  • !median [column] - Median value
  • !count [column] - Count of measurements
  • !any - True if any value exists
  • !closest(reference, offset, tolerance) [columns] - Value closest to reference time
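Several of these (for example !last, !mean, and !any) are not demonstrated above but follow the same pattern. A brief sketch, assuming heart_rate is loaded as above and vasopressor_administration is a hypothetical dynamic catalog variable:

from corr_vars.sources.aggregation import NativeStatic

# Mean heart rate over the whole stay
mean_hr = NativeStatic(
    var_name="mean_heart_rate",
    select="!mean value",
    base_var="heart_rate"
)

# True if any vasopressor administration was recorded
# ("vasopressor_administration" is a hypothetical variable name)
any_vaso = NativeStatic(
    var_name="any_vasopressor",
    select="!any",
    base_var="vasopressor_administration"
)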

Advanced Example: DerivedStatic (Single Value Output)#

Here’s a more complex example calculating a simplified SOFA score:

from corr_vars.sources.aggregation import DerivedStatic

def calculate_sofa_score(var, cohort):
    """Calculate SOFA score from multiple components."""

    # Access all required variables
    creat_var = var.required_vars["blood_creatinine"]
    plt_var = var.required_vars["blood_platelets"]
    bili_var = var.required_vars["blood_bilirubin"]
    pf_var = var.required_vars["pf_ratio"]

    # Start with cohort obs data (patient identifiers)
    result = cohort.obs.select(["icu_stay_id"]).clone()

    # Function to score individual components
    def score_creatinine(creat_value):
        return pl.when(creat_value < 1.2).then(0)\
                .when(creat_value < 2.0).then(1)\
                .when(creat_value < 3.5).then(2)\
                .when(creat_value < 5.0).then(3)\
                .otherwise(4)

    def score_platelets(plt_value):
        return pl.when(plt_value >= 150).then(0)\
                .when(plt_value >= 100).then(1)\
                .when(plt_value >= 50).then(2)\
                .when(plt_value >= 20).then(3)\
                .otherwise(4)

    def score_bilirubin(bili_value):
        return pl.when(bili_value < 1.2).then(0)\
                .when(bili_value < 2.0).then(1)\
                .when(bili_value < 6.0).then(2)\
                .when(bili_value < 12.0).then(3)\
                .otherwise(4)

    def score_pf_ratio(pf_value):
        return pl.when(pf_value >= 400).then(0)\
                .when(pf_value >= 300).then(1)\
                .when(pf_value >= 200).then(2)\
                .when(pf_value >= 100).then(3)\
                .otherwise(4)

    # Get admission values for each component
    creat_admission = creat_var.data.group_by("icu_stay_id").agg(
        pl.col("value").first().alias("creat_admission")
    )

    plt_admission = plt_var.data.group_by("icu_stay_id").agg(
        pl.col("value").first().alias("plt_admission")
    )

    bili_admission = bili_var.data.group_by("icu_stay_id").agg(
        pl.col("value").first().alias("bili_admission")
    )

    pf_admission = pf_var.data.group_by("icu_stay_id").agg(
        pl.col("value").first().alias("pf_admission")
    )

    # Join all components
    result = result\
        .join(creat_admission, on="icu_stay_id", how="left")\
        .join(plt_admission, on="icu_stay_id", how="left")\
        .join(bili_admission, on="icu_stay_id", how="left")\
        .join(pf_admission, on="icu_stay_id", how="left")

    # Calculate component scores
    result = result.with_columns([
        score_creatinine(pl.col("creat_admission")).alias("creat_score"),
        score_platelets(pl.col("plt_admission")).alias("plt_score"),
        score_bilirubin(pl.col("bili_admission")).alias("bili_score"),
        score_pf_ratio(pl.col("pf_admission")).alias("pf_score")
    ])

    # Calculate total SOFA score
    result = result.with_columns([
        (pl.col("creat_score") + pl.col("plt_score") +
         pl.col("bili_score") + pl.col("pf_score")).alias("sofa_score_admission")
    ])

    # Return final result with standard columns
    return result.select(["icu_stay_id", "sofa_score_admission"])

# Create the SOFA score variable as DerivedStatic
sofa_var = DerivedStatic(
    var_name="sofa_score_custom",
    requires=["blood_creatinine", "blood_platelets", "blood_bilirubin", "pf_ratio"],
    py=calculate_sofa_score,
    py_ready_polars=True
)

# Add to cohort
cohort = Cohort(obs_level="icu_stay", load_default_vars=False)
# First add required variables
for req_var in ["blood_creatinine", "blood_platelets", "blood_bilirubin", "pf_ratio"]:
    cohort.add_variable(req_var)
# Then add the derived variable
cohort.add_variable(sofa_var)

Working with Pandas (Legacy Mode)#

If you prefer working with pandas or have existing pandas code:

def pandas_based_function(var, cohort):
    """Example function using pandas instead of polars."""

    # Access required data (will be converted to pandas automatically)
    lactate_data = var.required_vars["blood_lactate"].data  # pandas DataFrame
    cohort_data = cohort.obs  # pandas DataFrame

    # Your pandas logic here
    result = lactate_data.groupby("icu_stay_id").agg({
        "value": ["first", "last", "max", "count"]
    }).reset_index()

    # Flatten column names
    result.columns = ["icu_stay_id", "first_lactate", "last_lactate", "max_lactate", "count_lactate"]

    # Calculate derived metrics
    result["lactate_clearance"] = (result["first_lactate"] - result["last_lactate"]) / result["first_lactate"]

    return result

# Create variable with pandas function as DerivedStatic
from corr_vars.sources.aggregation import DerivedStatic

lactate_clearance_var = DerivedStatic(
    var_name="lactate_clearance",
    requires=["blood_lactate"],
    py=pandas_based_function,
    py_ready_polars=False  # Function uses pandas
)

Native Variables (Database Extraction)#

Native variables extract NEW data from databases using SQL queries. These are source-specific and defined differently for each data source.

CUB-HDP Native Variables#

Native variables for CUB-HDP extract data directly from the Impala database tables:

from corr_vars.sources.cub_hdp import Variable

# Example: Extract a new laboratory variable
custom_lab_var = Variable(
    var_name="blood_custom_marker",
    table="it_ishmed_labor",  # Database table
    where="c_katalog_leistungtext LIKE '%custom marker%' AND c_wert <> '0'",  # SQL filter
    value_dtype="DOUBLE",  # Data type
    dynamic=True,  # Time-series data
    cleaning={"value": {"low": 0.1, "high": 100.0}}  # Value validation
)

# Example: Extract therapy events
custom_therapy_var = Variable(
    var_name="custom_therapy",
    table="it_copra6_therapy",  # Therapy table
    where="c_apparat_mode LIKE '%custom device%'",
    value_dtype="VARCHAR",
    dynamic=True,
    complex=False  # Simple extraction, no custom function needed
)

# Add to cohort
cohort = Cohort(obs_level="icu_stay", load_default_vars=False)
cohort.add_variable(custom_lab_var)
cohort.add_variable(custom_therapy_var)

Key Parameters for Native Variables:

  • table: Database table to query
  • where: SQL WHERE clause to filter rows
  • value_dtype: SQL data type (DOUBLE, VARCHAR, BOOLEAN, etc.)
  • dynamic: True for time-series, False for static data
  • cleaning: Optional value range validation
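Both examples above are dynamic extractions. A static extraction (one value per stay) uses the same parameters with dynamic=False; a minimal sketch, assuming a hypothetical table and filter:

from corr_vars.sources.cub_hdp import Variable

# Static extraction: one value per stay instead of a time series
# (the table and where clause here are illustrative, not real entries)
admission_height = Variable(
    var_name="body_height_admission",
    table="it_copra6_basisdaten",  # hypothetical table name
    where="c_feldname LIKE '%height%'",  # hypothetical filter
    value_dtype="DOUBLE",
    dynamic=False,  # single value, not time-series
    cleaning={"value": {"low": 100, "high": 250}}  # plausible cm range
)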

Complex Native Variables with Custom Functions#

For complex database extractions that require custom processing after extraction, you can use the py parameter with Native variables. The system first extracts raw data from the database using your table and where clause, then passes that data to your custom function for processing.

Example 1: Processing Extracted Laboratory Data

def process_blood_gas_data(var, cohort):
    """Process extracted blood gas data with custom logic."""

    # var.data contains the raw extracted data from the database
    # This was extracted using the table/where clause you provided
    raw_data = var.data

    # Apply custom processing logic to categorize blood gas values
    processed = raw_data.with_columns([
        # Parse and categorize pH values (pl.lit() is required so the
        # strings are treated as literals, not column names)
        pl.when(pl.col("value").cast(pl.Float64) < 7.35).then(pl.lit("acidosis"))
        .when(pl.col("value").cast(pl.Float64) > 7.45).then(pl.lit("alkalosis"))
        .otherwise(pl.lit("normal"))
        .alias("ph_category")
    ]).with_columns([
        # Keep the original numeric value as well
        pl.col("value").cast(pl.Float64).alias("ph_value")
    ])

    return processed.select(["icu_stay_id", "recordtime", "ph_value", "ph_category"])

# Native variable with custom processing (first extracts, then processes)
blood_gas_processed = Variable(
    var_name="blood_gas_ph_processed",
    table="it_ishmed_labor",  # Database table to extract from
    where="c_katalog_leistungtext LIKE '%pH%' AND c_wert <> ''",  # SQL filter
    value_dtype="VARCHAR",  # Raw data type from database
    dynamic=True,
    complex=True,  # Indicates custom processing needed
    py=process_blood_gas_data,  # Custom function processes the extracted data
    py_ready_polars=True
)

Example 2: Real Variable from vars.json - Glasgow Coma Score

Here’s how a real complex variable like glasgow_coma_score works (from the existing variable catalog):

def glasgow_coma_score_calculation(var, cohort):
    """
    Process extracted GCS components into total score.
    The raw data comes from database extraction of GCS components.
    """

    # Raw data was extracted from database using predefined table/where
    # This function processes it into meaningful GCS scores
    raw_gcs_data = var.data

    # Custom logic to parse and calculate total GCS
    # (Simplified example - actual implementation is more complex)
    processed = raw_gcs_data.with_columns([
        # Parse verbal, motor, and eye response components
        # and calculate total GCS score
        pl.when(pl.col("description").str.contains("Eye"))
        .then(pl.col("value").str.extract(r"(\d+)").cast(pl.Int32))
        .alias("eye_score")
    ]).group_by(["icu_stay_id", "recordtime"]).agg([
        pl.col("eye_score").sum().alias("total_gcs")
    ])

    return processed

# This is how glasgow_coma_score is defined in vars.json:
# {
#     "glasgow_coma_score": {
#         "type": "complex",
#         "dynamic": true,
#         "py_ready_polars": true
#     }
# }

# When you use it:
cohort.add_variable("glasgow_coma_score")
# 1. System extracts raw GCS data from database (predefined table/where)
# 2. Passes extracted data to glasgow_coma_score_calculation() function
# 3. Returns processed GCS scores

Example 3: Body Weight with Multiple Data Sources

def body_weight_calculation(var, cohort):
    """
    Process weight data from multiple database sources and pick the best value.
    """

    # Raw weight data extracted from multiple database tables
    raw_weights = var.data

    # Custom logic to prioritize and clean weight measurements
    processed = (
        raw_weights.with_columns(
            pl.col("value").cast(pl.Float64).alias("weight_kg"),
            # Rank measurement sources: prefer direct vital-signs documentation
            pl.when(pl.col("source_table") == "copra_vital_signs")
            .then(pl.lit(1))
            .otherwise(pl.lit(2))
            .alias("source_priority"),
        )
        # Remove implausible weights
        .filter((pl.col("weight_kg") >= 30) & (pl.col("weight_kg") <= 300))
        # Keep only each patient's best-ranked available source
        .filter(
            pl.col("source_priority")
            == pl.col("source_priority").min().over("icu_stay_id")
        )
        # Take the median weight per patient
        .group_by("icu_stay_id")
        .agg(pl.col("weight_kg").median().alias("body_weight_kg"))
    )

    return processed

# This corresponds to "body_weight" in vars.json:
# {
#     "body_weight": {
#         "type": "complex",
#         "dynamic": false,
#         "py_ready_polars": true
#     }
# }

Key Points for Complex Native Variables:

  1. Database Extraction First: The system uses predefined table and where clauses to extract raw data

  2. Then Custom Processing: Your py function receives this extracted data for processing

  3. Function Input: var.data contains the raw database extraction results

  4. Function Output: Return processed data in the expected format

  5. Configuration: Set complex=True and provide your py function

Source-Specific Variable Definitions#

For variables that will be reused, you can add them to the source-specific variable definitions.

Adding to CUB-HDP Variable Registry#

Step 1: Define your function in the variables module:

# File: src/corr_vars/sources/cub_hdp/mapping/variables.py

def my_custom_score(var, cohort):
    """Calculate a custom clinical score."""
    # Your function implementation here
    pass

Step 2: Add to the variable configuration:

# File: src/corr_vars/sources/cub_hdp/mapping/vars.json

{
    "variables": {
        "my_custom_score": {
            "type": "complex",
            "requires": ["blood_pressure_sys", "heart_rate", "age_on_admission"],
            "dynamic": false,
            "description": "Custom clinical severity score"
        }
    }
}

Step 3: Use the variable:

# Now you can use it like any other variable
cohort.add_variable("my_custom_score")

Creating Project-Specific Variables#

For research project-specific variables, use the cohort’s project variables:

# Define your function
def project_specific_calculation(var, cohort):
    # Your calculation logic
    pass

# Add to cohort's project variables
cohort.add_variable_definition("my_project_var", {
    "type": "complex",
    "requires": ["var1", "var2"],
    "dynamic": False,
    "py": project_specific_calculation
})

# Now use it
cohort.add_variable("my_project_var")

Best Practices for Custom Functions#

Error Handling#

def robust_custom_function(var, cohort):
    """Example with proper error handling."""

    try:
        # Check if required variables are available
        required_vars = ["blood_pressure_sys", "heart_rate"]
        for req_var in required_vars:
            if req_var not in var.required_vars:
                raise ValueError(f"Required variable {req_var} not available")

            if var.required_vars[req_var].data is None:
                raise ValueError(f"No data for required variable {req_var}")

        # Your calculation logic
        result = perform_calculation(var, cohort)

        # Validate result
        if result is None or len(result) == 0:
            raise ValueError("Function returned empty result")

        return result

    except Exception as e:
        print(f"Error in custom function {var.var_name}: {e}")
        # Return empty DataFrame with correct structure
        return pl.DataFrame({"icu_stay_id": [], "value": []})

Data Validation#

def validated_calculation(var, cohort):
    """Example with data validation."""

    # Get input data
    input_data = var.required_vars["blood_glucose"].data

    # Validate input data quality
    if len(input_data) == 0:
        print(f"Warning: No data for {var.var_name}")
        return pl.DataFrame({"icu_stay_id": [], "glucose_category": []})

    # Check for reasonable value ranges
    valid_data = input_data.filter(
        (pl.col("value") >= 20) & (pl.col("value") <= 800)  # mg/dL
    )

    if len(valid_data) < 0.8 * len(input_data):
        print(f"Warning: {len(input_data) - len(valid_data)} invalid glucose values removed")

    # Perform calculation on validated data (pl.lit() keeps the strings
    # as literals rather than column references)
    result = valid_data.with_columns([
        pl.when(pl.col("value") < 70).then(pl.lit("hypoglycemia"))
        .when(pl.col("value") < 140).then(pl.lit("normal"))
        .when(pl.col("value") < 200).then(pl.lit("hyperglycemia"))
        .otherwise(pl.lit("severe_hyperglycemia"))
        .alias("glucose_category")
    ])

    return result.select(["icu_stay_id", "recordtime", "glucose_category"])

Performance Optimization#

def optimized_calculation(var, cohort):
    """Example with performance optimizations."""

    # Use lazy evaluation when possible
    input_data = var.required_vars["large_dataset"].data.lazy()

    # Perform operations on lazy frame
    result = input_data.group_by("icu_stay_id").agg([
        pl.col("value").mean().alias("mean_value"),
        pl.col("value").max().alias("max_value"),
        pl.col("value").count().alias("count_measurements")
    ])

    # Only materialize when needed
    return result.collect()

Debugging Custom Functions#

def debug_custom_function(var, cohort):
    """Example with debugging information."""

    print(f"Processing variable: {var.var_name}")
    print(f"Required variables: {var.requires}")
    print(f"Cohort size: {len(cohort.obs)}")

    for req_var_name, req_var in var.required_vars.items():
        print(f"  {req_var_name}: {len(req_var.data) if req_var.data is not None else 0} rows")

    # Your calculation logic with intermediate debugging
    intermediate_result = step_1_calculation()
    print(f"After step 1: {len(intermediate_result)} rows")

    final_result = step_2_calculation(intermediate_result)
    print(f"Final result: {len(final_result)} rows")

    return final_result

Testing Custom Functions#

Always test your custom functions thoroughly:

# Create a test cohort
test_cohort = Cohort(
    obs_level="icu_stay",
    load_default_vars=False,
    sources={"cub_hdp": {"filters": "_d1"}}  # Small dataset for testing
)

# Add required variables
test_cohort.add_variable("blood_pressure_sys")
test_cohort.add_variable("heart_rate")

# Test your custom variable
try:
    test_cohort.add_variable(your_custom_variable)
    print("✓ Custom variable loaded successfully")

    # Check results
    result_data = test_cohort.obs["your_variable_name"]
    print(f"✓ Generated {result_data.null_count()} values, {result_data.null_count()} missing")

except Exception as e:
    print(f"✗ Error testing custom variable: {e}")

Common Patterns and Examples#

Time-Based Aggregations#

from corr_vars.sources.aggregation import DerivedDynamic

def rolling_average_function(var, cohort):
    """Calculate 6-hour rolling average."""

    data = var.required_vars["blood_lactate"].data

    # Sort by time and calculate rolling mean
    result = data.sort(["icu_stay_id", "recordtime"]).with_columns([
        # rolling_mean_by computes a time-based window (polars >= 1.0;
        # older versions used rolling_mean(..., by="recordtime"))
        pl.col("value").rolling_mean_by("recordtime", window_size="6h")
        .over("icu_stay_id")
        .alias("value")  # Standard column name
    ])

    return result.select(["icu_stay_id", "recordtime", "value"])

# Create as DerivedDynamic
rolling_lactate = DerivedDynamic(
    var_name="lactate_rolling_6h",
    requires=["blood_lactate"],
    py=rolling_average_function,
    py_ready_polars=True
)

Multi-Variable Calculations#

from corr_vars.sources.aggregation import DerivedDynamic

def calculate_map_function(var, cohort):
    """Calculate Mean Arterial Pressure from systolic and diastolic."""

    sys_data = var.required_vars["blood_pressure_sys"].data
    dia_data = var.required_vars["blood_pressure_dia"].data

    # Join on time (within 5 minutes); join_asof requires both
    # frames to be sorted by the join key
    joined = sys_data.sort("recordtime").join_asof(
        dia_data.rename({"value": "dia_value"}).sort("recordtime"),
        on="recordtime",
        by="icu_stay_id",
        tolerance="5m"
    )

    # Calculate MAP = (2*diastolic + systolic) / 3
    result = joined.with_columns([
        ((2 * pl.col("dia_value") + pl.col("value")) / 3).alias("value")
    ])

    return result.select(["icu_stay_id", "recordtime", "value"])

# Create as DerivedDynamic
map_calculated = DerivedDynamic(
    var_name="blood_pressure_mean_calculated",
    requires=["blood_pressure_sys", "blood_pressure_dia"],
    py=calculate_map_function,
    py_ready_polars=True,
    cleaning={"value": {"low": 30, "high": 150}}  # Physiological range
)

Summary#

This comprehensive guide covers two main types of custom variables:

Native Variables (New Database Extractions)#

  • Use Case: Extract NEW data from database tables that isn’t already available
  • Class: corr_vars.sources.cub_hdp.Variable (or other source-specific classes)
  • Key Parameters: table, where, value_dtype, dynamic
  • Example: Extract a new laboratory marker, therapy events, or diagnostic codes

Aggregation Variables (Process Existing Data)#

  • Use Case: Calculate NEW variables from existing variables already in the cohort
  • Classes:

  • NativeStatic: Simple aggregations of ONE dynamic variable (!first, !last, !max, !mean, etc.)

  • DerivedStatic: Complex calculations from multiple variables (e.g., SOFA score)

  • DerivedDynamic: Time-series output from multiple variables (e.g., shock index over time)

  • Key Parameters: NativeStatic takes base_var (one dynamic variable) and select (an aggregation function); Derived* classes take requires (a list of existing variables) and py (a custom function)

  • Example: First blood pressure, calculated clinical scores, complex transformations

Decision Tree: Which Type to Use?#

Do you need NEW data from the database?
├── YES → Use Native Variable (cub_hdp.Variable)
│   ├── Time-series? → dynamic=True
│   └── Single value? → dynamic=False
│
└── NO → Process existing variables
    ├── Simple aggregation of ONE dynamic variable?
    │   └── YES → Use NativeStatic (!first, !last, !max, !mean, etc.)
    │
    └── Complex processing or multiple variables?
        ├── Time-series output? → DerivedDynamic
        └── Single value output? → DerivedStatic

Quick Reference for Variable Selection#

Need NEW data from the database? → Native Variable (cub_hdp.Variable)

  • Simple extraction → dynamic=True/False, provide table + where
  • Complex processing → add complex=True + py=your_function

Processing existing variables? → Choose based on complexity:

  • Simple aggregation of ONE variable → NativeStatic (!first, !max, !mean, etc.)
  • Complex calculation from multiple variables → DerivedStatic or DerivedDynamic + py=your_function

Remember to start simple and gradually add complexity as you become more comfortable with the framework!