Satisfies Expression
Definition
Evaluates the given expression (any valid Spark SQL) for each record.
In-Depth Overview
The Satisfies Expression rule allows for a wide range of custom validations on the dataset. By defining a Spark SQL expression, you can create customized conditions that the data should meet.
This rule will evaluate an expression against each record, marking those that do not satisfy the condition as anomalies. It provides the flexibility to create complex validation logic without being restricted to predefined rule structures.
Field Scope
Calculated: The rule automatically identifies the fields involved, without requiring explicit field selection.
General Properties
Name | Description |
---|---|
Filter | Allows the targeting of specific data based on conditions |
Coverage Customization | Allows adjusting the percentage of records that must meet the rule's conditions |
The filter allows you to define a subset of data upon which the rule will operate.
It requires a valid Spark SQL expression that determines which rows the rule should evaluate. Because the expression is applied directly to the Spark DataFrame, full SQL statements and traditional constructs like WHERE clauses are not supported — supply only the condition itself.
Examples
Direct Conditions
Simply specify the condition you want to be met.
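For instance, a single comparison on an assumed numeric column (the column name is illustrative):

```sql
amount >= 0
```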
Combining Conditions
Combine multiple conditions using logical operators like AND and OR.
Correct usage
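A hypothetical expression combining two conditions with AND, on assumed columns amount and status:

```sql
amount > 100 AND status = 'OPEN'
```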
Incorrect usage
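By contrast, a full SQL statement with a WHERE clause is rejected, since only a bare boolean expression is accepted (table and column names here are illustrative):

```sql
-- Not supported: a full query / WHERE clause instead of a bare expression
SELECT * FROM orders WHERE amount > 100 AND status = 'OPEN'
```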
Utilizing Functions
Leverage Spark SQL functions to refine and enhance your conditions.
Correct usage
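For example, applying Spark SQL's built-in LENGTH and YEAR functions to assumed columns name and order_date:

```sql
LENGTH(name) > 3 AND YEAR(order_date) >= 2020
```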
Incorrect usage
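Whereas embedding the function call in a standalone query is not a valid expression (names illustrative):

```sql
-- Not supported: a SELECT statement instead of a boolean expression
SELECT LENGTH(name) FROM customers
```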
Using scan-time variables
To refer to the current dataframe being analyzed, use the reserved dynamic variable {{_qualytics_self}}
.
Correct usage
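A sketch of a subquery over the current dataframe via the reserved variable (the column amount is illustrative):

```sql
amount > (SELECT AVG(amount) FROM {{_qualytics_self}})
```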
Incorrect usage
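Referencing a different container or table inside the subquery, by contrast, results in an error (names illustrative):

```sql
-- Not supported: subquery against another container or table
amount > (SELECT AVG(amount) FROM other_table)
```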
While subqueries can be useful, their application within filters in our context has limitations. For example, directly referencing other containers or the broader target container in such subqueries is not supported. Attempting to do so will result in an error.
Important Note on {{_qualytics_self}}
The {{_qualytics_self}} keyword refers to the dataframe currently under examination. In a full scan, this variable represents the entire target container. During incremental scans, however, it reflects only a subset of the target container, capturing just the incremental data. It's important to recognize that in such scenarios, {{_qualytics_self}} may not encompass all entries from the target container.
Specific Properties
Evaluates each record against a specified Spark SQL expression to ensure it meets custom validation conditions.
Name | Description |
---|---|
Expression | Defines the Spark SQL expression that each record should meet. |
Info
Refer to the Filter Guide in the General Properties topic for examples of valid Spark SQL expressions.
Anomaly Types
Type | Description |
---|---|
Record | Flag inconsistencies at the row level |
Shape | Flag inconsistencies in the overall patterns and distributions of a field |
Example
Objective: Ensure that the total tax applied to each item in the LINEITEM table is not more than 10% of the extended price.
Sample Data
L_ORDERKEY | L_LINENUMBER | L_EXTENDEDPRICE | L_TAX |
---|---|---|---|
1 | 1 | 10000 | 900 |
2 | 1 | 15000 | 2000 |
3 | 1 | 20000 | 1800 |
4 | 1 | 10000 | 1500 |
Inputs
- Expression: L_TAX <= L_EXTENDEDPRICE * 0.10
Anomaly Explanation
In the sample data above, the entries with L_ORDERKEY 2 and 4 do not satisfy the rule because their L_TAX values are more than 10% of their respective L_EXTENDEDPRICE values.
```mermaid
graph TD
    A[Start] --> B[Retrieve L_EXTENDEDPRICE and L_TAX]
    B --> C{Is L_TAX <= L_EXTENDEDPRICE * 0.10?}
    C -->|Yes| D[Move to Next Record/End]
    C -->|No| E[Mark as Anomalous]
    E --> D
```
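The evaluation flow above can be reproduced in a small plain-Python sketch using the sample rows. This is only illustrative — the product evaluates the expression in Spark SQL, not in Python:

```python
# Sample LINEITEM rows from the table above.
rows = [
    {"L_ORDERKEY": 1, "L_EXTENDEDPRICE": 10000, "L_TAX": 900},
    {"L_ORDERKEY": 2, "L_EXTENDEDPRICE": 15000, "L_TAX": 2000},
    {"L_ORDERKEY": 3, "L_EXTENDEDPRICE": 20000, "L_TAX": 1800},
    {"L_ORDERKEY": 4, "L_EXTENDEDPRICE": 10000, "L_TAX": 1500},
]

# Expression under test: L_TAX <= L_EXTENDEDPRICE * 0.10
# Rows where the expression is NOT satisfied are marked anomalous.
anomalies = [
    r["L_ORDERKEY"]
    for r in rows
    if not (r["L_TAX"] <= r["L_EXTENDEDPRICE"] * 0.10)
]
print(anomalies)  # → [2, 4]
```

Rows 2 and 4 are flagged, matching the anomaly explanation and the 50.000% shape-anomaly message (2 of 4 filtered records).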
Potential Violation Messages
Record Anomaly
The record does not satisfy the expression: L_TAX <= L_EXTENDEDPRICE * 0.10
Shape Anomaly
50.000% of 4 filtered records (2) do not satisfy the expression: L_TAX <= L_EXTENDEDPRICE * 0.10