ET1’s DAG Streaming System

Familiar with graphs? How about DAGs? This is not a paradigm shift, but think of the DAG as a cool way for a tiny team in Austin/Dallas, Texas to build Extract, Transform, and Load (ETL) software!

Like a guitar pedal, there’s an input and an output. Sometimes it’s just an output, and then you have your input-only tools. Very much like our ETL software, ET1.

The DAG engine gives this small team the ability to create ETL software with rules and futuristic features.

We leverage the same principles employed by other well-regarded Directed Acyclic Graph (DAG) tools, such as Apache Spark, Apache Airflow, Apache Beam, Kubeflow Pipelines, MLflow, TensorFlow, Dagster, Prefect, Argo Workflows, Google Cloud Composer, and Azure Data Factory, among others.

We created our own custom DAG engine using JavaScript, and this enables us to flow data downstream in a web app. Data streaming in no-code ETL software, without any setup or install, feels like a big win for any ETL software.

In simple terms, acyclic means not looping: this diagram/graph shows no loops.

What is a graph?

From a data perspective, a graph is a non-linear data structure used to model and store information where the relationships between individual data points are as important as the data itself. Natively, a graph engine treats data as a first-class citizen, enabling real-time data processing and the ability to compute only what needs to be computed.

Unlike tables in a relational database, which store data in a fixed, row-and-column format, a graph is a flexible, interconnected network of entities and their relationships. With ET1, we fit this graph engine together so that it looks and feels like regular ETL software, enabling a lot of cool functionality and features that regular ETL software is unable to offer.

We don’t mean to appear as if we are reinventing the wheel; rather, we are adding a different style to the typical nodes or tools you have come to know and love.

No looping… Acyclic. Stop recycling the same rows…

Focusing solely on the detrimental effects of loops is insufficient. While infinite loops can undoubtedly disrupt systems and lead to financial losses, a more significant concern is the unnecessary recycling of data, a practice prevalent in many software applications. Why repeatedly query data when it is not needed? Many tools, including Tableau, Power BI, Alteryx, and KNIME, inherently recycle data rows. This forces you to re-query 100% of each table during both development and production, resulting in frequent downtime, a constant need for backfilling and the work of managing it, increased system strain, and continually escalating costs. Where has the concept of incremental data loading gone?

By using this DAG system, we unlock the ability to stream data incrementally and avoid unnecessary backfilling!
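
To make the idea concrete, here is a minimal, hypothetical sketch of incremental loading in plain JavaScript. It is not ET1’s actual engine code; the `fetchRows` callback and the numeric `id` column are assumptions made up for the example.

```javascript
// Hypothetical sketch of incremental row streaming: only rows that arrived since
// the last run are pushed downstream, instead of re-querying the whole table.
function createIncrementalSource(fetchRows) {
  let lastSeenId = 0; // watermark: highest row id already processed

  return async function pullNewRows(pushDownstream) {
    const rows = await fetchRows(); // e.g. a CSV read or an API call
    const fresh = rows.filter((row) => row.id > lastSeenId);
    for (const row of fresh) {
      lastSeenId = Math.max(lastSeenId, row.id);
      pushDownstream(row); // downstream nodes never see recycled rows
    }
  };
}
```

The watermark is the whole trick: once a row has flowed downstream, nothing upstream has to touch it again.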

This is one of many reasons we feel ET1 is a powerful data engineering solution.

ET1 is, by law, acyclic

Meaning not forming part of a cycle. However, many ETL-style tools, in both visualization and ETL, still run on the same engine as 20 years ago. Many things have changed in 20 years, like the ability to avoid recycling data natively.

In the data world, acyclic means no looping is possible, and from a row perspective, this is powerful because you’re always incrementally loading downstream.

This application is unable to loop back on itself, a safe behavior that avoids novice mistakes which can instantly cost a lot of money in the wrong hands.

Consider the DAG engine a beneficial rule for ETL software that cares about rows. Most ETL software cares about columns and tables; rows become second-class citizens because in database land, columns and tables are king.

These classic ETL tools constantly recycle, most allow looping, and naturally this pushes more work onto your systems and increases costs.

This is one of many reasons we feel the DAG engine is important. In this diagram, 2 goes to 5 and then back to 1; this isn’t possible in ET1’s UX, and it also isn’t possible per row. This enables incremental, row-level refreshing, saving time when engineering solutions, and making tweaks never causes considerable downtime again!

This diagram is not possible under the engine’s rule base: because of the loop, it is not a DAG.
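
As an illustration of how an engine can enforce the no-loop rule, here is a minimal sketch (not ET1’s actual implementation) that rejects a new edge whenever it would create a cycle, using a simple depth-first reachability check. The node ids mirror the 2-goes-to-5 diagram above.

```javascript
// Minimal sketch: refuse to connect nodes if the new edge would form a loop.
// `edges` maps a node id to the ids it feeds into.
function wouldCreateCycle(edges, from, to) {
  // Adding from -> to closes a loop if `from` is already reachable from `to`.
  const stack = [to];
  const visited = new Set();
  while (stack.length > 0) {
    const node = stack.pop();
    if (node === from) return true;
    if (visited.has(node)) continue;
    visited.add(node);
    for (const next of edges.get(node) || []) stack.push(next);
  }
  return false;
}

// Example graph: 1 -> 2 -> 5.
const edges = new Map([[1, [2]], [2, [5]], [5, []]]);
console.log(wouldCreateCycle(edges, 5, 6)); // false: 5 -> 6 is safe to add
console.log(wouldCreateCycle(edges, 5, 1)); // true: 5 -> 1 would loop back, so it is blocked
```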

Looping still confusing as a negative? Imagine an application that could easily loop on itself, like a delay pedal that feeds back on itself: it would get infinitely louder and could destroy your ears or speakers. From a data perspective, looping on yourself could spell disaster for your computer, other computers, your network, your API bills, and much more. Loops would be a negative because they would allow people to break their computer and attached machines.

A DAG is predictable and a great engine for flowing data downstream with rules and better feature sets, and it enables easier visual feedback to teach end users.

Core Concept: No Play Button, Data Flows, DAG Guides, Ready?

The DAG (Directed Acyclic Graph) system is like a digital assembly line for your data, where each node is a workstation that processes data and passes it along. This changes how data is computed.

Instead of maxing out a few nodes by querying all the data at once before starting a new node, each piece of your data is treated like a first-class citizen in ET1.

Here’s how it works:

Is this data ready?

Yes or no?

When you go climbing, you are always talking to your partner: are they ready or not? Is the person keeping you safe ready for you to fall? Are you ready? The person keeping you safe should always be ready. ET1 is always ready, so data is always flowing.

Being “always ready” is the key; the DAG is the bumpers to fall within, and our guide. It enables things like streaming and processing only what’s necessary, and branching off big ideas becomes simple.

Key Components

  1. Nodes – Individual processing units (like filters, joins, calculations)
  2. Edges – Connections showing data flow between nodes
  3. Data Streams – The actual data flowing through the system
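
To picture these three components together, here is a toy sketch in plain JavaScript (the language ET1’s engine is written in, though this is not its actual code). The node names and the `amount` column are invented for the example.

```javascript
// A toy graph: nodes hold a transform, edges describe where each node's output flows.
const nodes = new Map([
  ["input",  { transform: (rows) => rows }],
  ["filter", { transform: (rows) => rows.filter((r) => r.amount > 100) }],
  ["sum",    { transform: (rows) => [{ total: rows.reduce((t, r) => t + r.amount, 0) }] }],
]);

const edges = [
  { from: "input",  to: "filter" }, // data stream: input -> filter
  { from: "filter", to: "sum" },    // data stream: filter -> sum
];

// Push a batch of rows into a node and let it flow downstream along the edges.
function run(nodeId, rows) {
  const output = nodes.get(nodeId).transform(rows);
  for (const edge of edges.filter((e) => e.from === nodeId)) {
    run(edge.to, output);
  }
  return output;
}

run("input", [{ amount: 50 }, { amount: 250 }]); // only the 250 row reaches "sum"
```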

How It Works

Automatic Updates

  • Change a node? The system only recalculates what’s needed downstream
  • No manual refreshing – updates happen in real-time

Smart Processing

  • Only processes changed data paths
    • Alteryx and KNIME users tired of unnecessary data processing will be excited about this feature
  • Avoids redundant calculations
    • The DAG engine lets you only calculate what changes, decreasing your compute and time spent creating solutions

Visual Flow

  • See your data transform step by step
  • Easy to spot where changes are needed
  • Intuitive drag-and-drop interface

Why ET1 is Better

  • No More Waiting: Only recalculates what’s necessary
    • Never get stuck waiting for data to re-run because you made a change; only calculate what matters. The graph enables the ability to calculate one thing at a time
    • Most products have to re-calculate the entire table before it’s ready to move forward
  • Mistake-Proof: Can’t create circular references, very helpful
    • Users are unable to make big mistakes like spamming their API in an infinite loop
    • No one will be able to increase their cloud costs because they made an easy mistake
    • Exploration has no penalties, crafting a sense of trust in non-technical users
    • Decrease stress and network strains by avoiding infinite loops
  • Visual Debugging: See exactly where data changes happen, a visual teacher
    • Created to help people visually understand their data processes
    • Highlighting lets you quickly see and understand the data automation
  • Scalable: Handles simple to complex workflows with ease

Think of it like a factory conveyor belt system – each station (node) does its job and passes the product (data) to the next station, with the system automatically managing the flow and only processing what’s needed.
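
Here is a minimal, hypothetical sketch of that “only process what’s needed” behavior: when one node changes, it and everything downstream of it is marked dirty and recomputed, while the rest of the graph keeps its cached results. The `compute` and `cache` parameters are placeholders, not ET1’s real API.

```javascript
// Hypothetical sketch of downstream-only recomputation.
// `downstream` maps a node id to the node ids it feeds.
function recomputeFrom(changedNode, downstream, compute, cache) {
  const dirty = new Set();
  const queue = [changedNode];
  while (queue.length > 0) {
    const node = queue.shift();
    if (dirty.has(node)) continue;
    dirty.add(node);
    for (const next of downstream.get(node) || []) queue.push(next);
  }
  // Recompute only the dirty nodes, in breadth-first order; a real engine would
  // use a topological sort so every upstream value is fresh before it is read.
  for (const node of dirty) cache.set(node, compute(node));
  return cache; // untouched nodes keep their cached results
}
```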

Competitive analysis

Instead of constantly recycling the same rows over and over, ET1 enables anyone to compute only the rows that need to be updated, versus re-running each table unnecessarily.

This is unlike problem-solving tools such as KNIME, Alteryx, Tableau, Power BI, and most BI platforms.

In most software, if your pipeline changes, you have to re-run 100% of the records.

ET1 defeats this with its engine.

The DAG engine introduces what we feel is a great foundation for a powerful ETL tool that can scale in the future.

We believe only the data that matters should flow downstream, and a DAG natively supports that by design. Using this DAG engine, we are able to flow only what matters and make problem solving feel modern.

Future outlook

We are not married to this engine but believe it has been very beneficial thus far. Our goal is not to become fixated on the engine but rather on the features it can offer.

The graph approach means it is easy for us to scale up to cloud or server off-loading situations in the future, and that’s the easy piece.

Knowing that DAG systems are the backbone of many major big-data appliances, know that we are thinking bigger: big picture and next steps, too.

If you have a use case that isn’t possible on your current machine, let us know.

Return to ET1 Overview to learn more.

ET1’s Aggregation Node Overview

Aggregation, what a classic. Aggregating your data is a landmark trait for any data steward, data wrangler, or data analyst. In ET1, you can easily aggregate your data.

The Power of Grouping (Group By) with the Aggregate Node

Aggregations turn a sea of numbers into meaningful insights. Group by in ET1 is nested in the aggregate node because most people will be used to grouping and aggregating in the same step, just like SQL’s GROUP BY.

Understanding how GROUP BY works in SQL is a lifesaver. However, ET1 gives you the same superhero powers.

The Group By Node is the foundation of your aggregation

This lets you split the information across a non-aggregating column; otherwise, you’re creating a single KPI.

Create your KPI, understand the number of records, and various ways to aggregate.

By default, aggregation starts with count_rows to enable faster development cycles.

🔢 The Essential Aggregations

  1. Sum
    • Adds up all values
    • Perfect for: Sales totals, revenue, quantities
  2. Average (Mean)
    • Finds the middle ground
    • Great for: Test scores, ratings, temperatures
  3. Minimum/Maximum
    • Spot the extremes
    • Use for: Price ranges, performance metrics, outliers
  4. Count
    • Simple but powerful
    • Tells you: How many? How often?
  5. Number of records
    • By default, you will get the “number of records”
    • You’re welcome!
  6. Count Distinct?
    • Well, count distinct is nice but…
    • This really means your data is duplicated!

🎯 Group By: The Game Changer

The real magic happens when you combine these with Group By:

  • Sales by Region: Group by Region, Aggregate Sum(Revenue), sketched below
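
Here is a minimal sketch of what that “Sales by Region” example boils down to, written as plain JavaScript rather than ET1’s actual node code; the column names are just for illustration.

```javascript
// Group by one column, then sum another: the essence of "Sales by Region".
function groupBySum(rows, groupColumn, sumColumn) {
  const totals = new Map();
  for (const row of rows) {
    const key = row[groupColumn];
    totals.set(key, (totals.get(key) || 0) + Number(row[sumColumn] || 0));
  }
  return [...totals].map(([group, sum]) => ({ [groupColumn]: group, [sumColumn]: sum }));
}

const sales = [
  { Region: "West", Revenue: 100 },
  { Region: "East", Revenue: 40 },
  { Region: "West", Revenue: 60 },
];
groupBySum(sales, "Region", "Revenue");
// → [{ Region: "West", Revenue: 160 }, { Region: "East", Revenue: 40 }]
```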

Real-World Examples

How would you aggregate in the real world?

E-commerce

  1. Total sales per city
  2. Average ticket sales per state
  3. Average order value by customer segment

Education

  • Pass rates by subject
  • Sum of students per day
  • Average of students per month per class

Finance

  • Monthly expenses by category
  • Highest spending customers

Pro Tips

  1. Start Simple – Try one aggregation at a time
  2. Clean First – Make sure it’s just numbers or you’re not aggregating
  3. Check Your Groups – Make sure your groups make sense, very powerful for data reduction

Aggregation needs to be simple. Let us know if it’s not.

Return to ET1 Overview to learn more.

ET1 Data Combination Tools

Are you combining the data? We have you covered. ET1 has all the right tools.

The Three Musketeers of Data Combination

1. 🤝 Join (The Matchmaker)

  • What it does: Combines tables based on matching values
  • Perfect for:
    • Merging customer data with their orders
    • Adding product details to sales records
    • Any “this goes with that” scenario
  • Inner join, left join: this is the tool.
  • Automatically infers join columns based on matching headers
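
For readers who like to see the mechanics, here is a minimal sketch of an inner/left join on a shared key column, as plain JavaScript rather than ET1’s actual node code; the sample tables are made up.

```javascript
// Minimal sketch of a join on one key column, supporting "inner" and "left".
function join(left, right, key, type = "inner") {
  const index = new Map();
  for (const row of right) {
    const k = row[key];
    if (!index.has(k)) index.set(k, []);
    index.get(k).push(row);
  }
  const joined = [];
  for (const row of left) {
    const matches = index.get(row[key]) || [];
    if (matches.length === 0 && type === "left") joined.push({ ...row });
    for (const match of matches) joined.push({ ...match, ...row });
  }
  return joined;
}

// Example: merge customer details onto their orders.
const customers = [{ customer_id: 1, name: "Ada" }];
const orders = [{ customer_id: 1, total: 50 }, { customer_id: 2, total: 75 }];
join(orders, customers, "customer_id", "left");
// → [{ customer_id: 1, name: "Ada", total: 50 }, { customer_id: 2, total: 75 }]
```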

2. 🔗 Union (The Stacker)

  • What it does: Stacks datasets with the same structure
  • Perfect for:
    • Combining monthly reports
    • Stacking related data files
    • Merging similar datasets from different sources
    • Creating master lists from multiple spreadsheets

3. 🧵 Concat ([bring], [it], [together], [with], “glue”)

Concat merges everything, and it doesn’t care about data types.

  • What it does: Merges text from different columns
  • Add a custom string between what you’re merging.
  • Perfect for:
    • Creating full names from first/last
    • Building addresses from components
    • Generating unique IDs or labels
    • Bringing together State and City in one column
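
Union and Concat are even simpler; here is a rough sketch of both, assuming rows are plain JavaScript objects (this is illustrative, not ET1’s actual code).

```javascript
// Union: stack datasets that share the same column structure.
const union = (...tables) => tables.flat();

// Concat: merge text from several columns into one, with custom "glue".
const concatColumns = (rows, columns, glue = ", ") =>
  rows.map((row) => ({
    ...row,
    combined: columns.map((col) => String(row[col] ?? "")).join(glue),
  }));

// Example: City and State in one column.
concatColumns([{ City: "Austin", State: "TX" }], ["City", "State"]);
// → [{ City: "Austin", State: "TX", combined: "Austin, TX" }]
```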

Real-World Examples

Join:

  • Match customer emails with their support tickets
  • Combine product IDs with inventory details

Union:

  • Merge Q1, Q2, Q3 sales into one report
  • Combine survey responses from different regions

Concat:

  • Create “Last, First” name formats
  • Build URLs from domain + path components

Pro Tips

  • Joins work best with unique identifiers
  • Union requires matching column structures
  • Concat can add custom separators (spaces, dashes, etc.)
  • Remove duplicate records

No more copy-pasting between spreadsheets or writing complex formulas – just connect the dots and let the data flow! No strange joining tools in Excel, no learning the difference between joins, and just get your data wrangled already!

Learn more about our app: ET1 overview page.

Filtering Nodes in ET1

The filtering nodes help you reduce the number of rows, drill into the exact information needed, and create a data set that adds value rather than confusing your audience.

When filtering, remember you’re reducing the amount of data coming through the node; you can also swap the filter to include instead of exclude.

Include, exclude, and ultimately work on your data.

The Filtering Nodes in ET1

1. 🔍 Any Column Filter

  • The Swiss Army Knife
    • Search across all columns at once
    • Perfect for quick data exploration
    • No setup required – just type and filter

2. 📋 Column Filter

  • The Precision Tool
    • Filter specific columns with exact matches
    • Create multiple filter conditions
    • Ideal for structured, clean data

3. 🧮 Measure Filter

  • The Number Cruncher
    • Filter numeric columns using conditions like:
      • Greater than/less than
      • Between ranges
      • Above/below average
    • Great for financial data, metrics, and KPIs

4. 🌀 Wild Headers

  • Include or exclude headers based on a wildcard
    • Easily clean wide tables
    • No-brainer approach to column filtering
    • Column filter is nice, but at times wild headers are king
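
As a rough picture of what these four nodes do under the hood, here is a plain-JavaScript sketch; it is illustrative only, not ET1’s implementation, and the example calls at the end assume some `rows` array of objects.

```javascript
// Any Column: keep rows where any cell contains the search text.
const anyColumnFilter = (rows, text) =>
  rows.filter((row) =>
    Object.values(row).some((v) => String(v).toLowerCase().includes(text.toLowerCase())));

// Column: keep rows where a specific column exactly matches a value.
const columnFilter = (rows, column, value) => rows.filter((row) => row[column] === value);

// Measure: keep rows where a numeric column satisfies a condition.
const measureFilter = (rows, column, predicate) =>
  rows.filter((row) => predicate(Number(row[column])));

// Wild Headers: keep only the columns whose names match a wildcard pattern.
const wildHeaders = (rows, pattern) => {
  const regex = new RegExp("^" + pattern.replace(/\*/g, ".*") + "$", "i");
  return rows.map((row) =>
    Object.fromEntries(Object.entries(row).filter(([header]) => regex.test(header))));
};

// Example calls (given some `rows` array of objects):
// measureFilter(rows, "Revenue", (v) => v > 1000); // greater-than condition
// wildHeaders(rows, "sales_*");                    // keep only sales_* columns
```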

Why This Beats Spreadsheet Hell

  1. Visual Feedback: See filtered results instantly
  2. Non-Destructive: Your original data stays safe
  3. Never recycle: You never filter data unnecessarily
  4. Stackable: Chain multiple filters for complex queries
  5. Reversible: Remove or modify filters anytime

Filtering Pro Tips

Be willing to test filters and create branches. Then right-click the beginning of a branch to duplicate the entire downstream operation. This lets you edit filters across multiple streams of data and see the difference between your filters!

Start with “Any Column” to explore strings and the measure filter to explore measures, then switch to specific column filters as you understand your data better, and use wild headers for those edge cases where you have a lot of columns (but only a couple matter).

ET1 is built to easily filter (transform) your data. Remember, it’s like having a conversation with your dataset!

Return to ET1 Overview to learn more.

ET1’s Data Input Node Overview

CSV, JSON, and GitHub CSVs, plus manual tables.

These help you kick-start your data pipeline. ET1 helps you do that and a bit more.

Once your data comes into the data input, it begins to flow downstream using a custom DAG streaming engine.

You know the drill: data tools are very similar, and it all starts with extracting your data.

But are you familiar with where your data lives? Start asking, documenting, and building your understanding of your data environment. This software will help you warehouse that information into a single canvas, without having to ask engineering for help.

Input Node Overview

The input nodes are essential for moving the needle in ET1; without data, we are just using our feelings!

  • The CSV Input node is great for getting your Comma delimited files into ET1.
  • The JSON Input node is great for getting JSON in the app, your engineering team will be happy.
  • The GitHub CSV node lets you pull CSVs off the public internet. That’s fun. Enrich your data pipelines.
  • The manual table is great: synthesize a few rows, add a table, make life easier.

The future of data inputs for ET1

We are eager to add more connections, but today we are keeping it simple by offering CSV, JSON, GitHub CSV, and manual tables.

Next? Excel input perhaps.

ETL Input Nodes – Simple as Pie

📊 CSV Input

  • What it does: Loads data from CSV files or text
  • Why it’s cool:
    • Drag & drop any CSV file
    • Handles messy data with smart parsing
    • Preview before committing
  • No more: Fighting with Excel imports or command-line tools

🧾 JSON Input

  • What it does: Imports JSON data from files or direct input
  • Why it’s cool:
    • Works with nested JSON structures
    • Automatically flattens complex objects
    • Great for API responses and config files
  • No more: Writing custom parsers for every JSON format
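
Here is a small sketch of the kind of flattening the JSON Input node performs on nested objects; the exact behavior in ET1 may differ, this just illustrates the idea with dot-separated column names.

```javascript
// Flatten nested JSON objects into dot-separated column names,
// e.g. { user: { name: "Ada" } } -> { "user.name": "Ada" }.
function flatten(obj, prefix = "") {
  const flat = {};
  for (const [key, value] of Object.entries(obj)) {
    const column = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(flat, flatten(value, column));
    } else {
      flat[column] = value;
    }
  }
  return flat;
}

flatten({ user: { name: "Ada", address: { city: "Austin" } }, active: true });
// → { "user.name": "Ada", "user.address.city": "Austin", "active": true }
```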

📝 Manual Table

  • What it does: Create data tables by hand
  • Why it’s cool:
    • Add/remove rows and columns on the fly
    • Perfect for quick mockups or small datasets
    • Edit cells like a spreadsheet
  • No more: Creating throwaway CSV files for tiny datasets

🐙 GitHub CSV

  • What it does: Pull CSV files directly from GitHub
  • Why it’s cool:
    • Point to any public GitHub CSV file
    • Auto-refreshes on URL change
      • A fetch button to ‘get’ it again
    • Great for GitHub collaboration
  • No more: Downloading data, this gets it for you.
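
To illustrate the idea behind the GitHub CSV node, here is a rough sketch of fetching a public raw CSV URL in the browser and turning it into row objects. The URL is a hypothetical placeholder, and the simple comma split stands in for a real CSV parser (quoted fields and escapes need more care); this is not ET1’s actual logic.

```javascript
// Sketch: fetch a public CSV from a raw GitHub URL and turn it into row objects.
async function fetchGithubCsv(rawUrl) {
  const response = await fetch(rawUrl);               // browser fetch API
  const text = await response.text();
  const [headerLine, ...lines] = text.trim().split("\n");
  const headers = headerLine.split(",");               // simplification: no quoted fields
  return lines.map((line) => {
    const cells = line.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}

// Hypothetical example URL; any public raw.githubusercontent.com CSV works the same way.
fetchGithubCsv("https://raw.githubusercontent.com/example/repo/main/data.csv")
  .then((rows) => console.log(rows.length, "rows fetched"));
```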

The Best Part?

No coding required.

No complex setup.

Just point, click, and start transforming your data like a pro data engineer.

What used to take hours (and a computer science degree) now takes seconds, and it isn’t scary.

Return to ET1 Overview to learn more.