ET1 Constant Node

The Constant Node adds a column with a constant value to every row in your data pipeline.

This node is extremely handy when transforming data in your ETL processes.

The Constant Node is rather straightforward: two inputs and you’re done.

Using the Constant Node

Add the Constant Node to your canvas, send data downstream to your node and open the settings.

  1. edit the constant column name or keep the default “const”
  2. add a value

The Constant Node highlights the constant column so that you’re able to easily identify the additional column.

Quick example of the Constant Node

In this example, we send the City, State data from the CSV Input Node to the Constant Node, add “USA” as the Value, and set “Country” as the Column header.
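As a rough sketch of what happens to each row (assuming rows are plain objects keyed by column header; the function name and sample rows are illustrative, not ET1’s actual internals):

```javascript
// Minimal sketch of the Constant Node: every row gains one new column
// holding the same value. (Illustrative only, not ET1's actual source.)
function addConstantColumn(rows, columnName = 'const', value = '') {
  return rows.map((row) => ({ ...row, [columnName]: value }));
}

// The City, State example from above:
const rows = [
  { City: 'Boston', State: 'MA' },
  { City: 'Dallas', State: 'TX' },
];
console.log(addConstantColumn(rows, 'Country', 'USA'));
// [ { City: 'Boston', State: 'MA', Country: 'USA' },
//   { City: 'Dallas', State: 'TX', Country: 'USA' } ]
```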

Thanks for learning more about ET1’s Constant Node

We appreciate you using ET1’s Constant Node, and if you have any questions, please contact us.

We would love to hear about your solutions.

Return to ET1 Overview to learn more.

ET1 Concat Node

Bring your columns together as one with the Concat Node in ET1.

This node is similar to CONCAT() in Excel: it lets you easily bring two or more columns together in your data pipeline, and it also gives you the ability to choose the delimiter. The opposite of the Concat Node is the Split Node.

How to use Concat Node in ET1

Simply send data to the Concat Node and start setting up your node.

  1. choose the columns
  2. choose the separator
  3. name the output column
  4. keep the original columns (yes or no)

In this example we have Location and Supplier columns that need to be put together and then removed from our data pipeline. By default the Concat Node names the new column “concatenated,” which might help early adopters remember what happened in this column. However, our boss asked us to change the header while concatenating the data in their CSV.

We use the CSV Input Node to bring in the data. Although the Column Renamer Node could help, the Concat Node lets you name the output column directly, consolidating your effort in case renaming the header is ideal.

The Concat Node has 4 settings that help end users clean up their concatenation efforts. You may or may not want the original columns, and this is an important element to consider, as the sketch below shows.
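A minimal sketch of those four settings in action (the function name, option names, and sample rows are illustrative assumptions, not ET1’s actual internals):

```javascript
// Minimal sketch of the Concat Node's four settings: columns, separator,
// output column name, and whether to keep the original columns.
function concatColumns(rows, { columns, separator = ' ', outputName = 'concatenated', keepOriginal = true }) {
  return rows.map((row) => {
    const joined = columns.map((c) => row[c] ?? '').join(separator);
    // Optionally drop the source columns from the output row.
    const result = keepOriginal
      ? { ...row }
      : Object.fromEntries(Object.entries(row).filter(([k]) => !columns.includes(k)));
    result[outputName] = joined;
    return result;
  });
}

// The Location/Supplier example: join, rename the header, drop originals.
const rows = [{ Location: 'Boston', Supplier: 'Acme' }];
console.log(concatColumns(rows, {
  columns: ['Location', 'Supplier'],
  separator: ' - ',
  outputName: 'LocationSupplier',
  keepOriginal: false,
}));
// [ { LocationSupplier: 'Boston - Acme' } ]
```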

Thanks for learning more about ET1’s Concat Node

We appreciate you using ET1’s Concat Node, and if you have any questions, please contact us.

We would love to hear about your solutions.

Return to ET1 Overview to learn more.

ET1 Find/Replace Node

Automatically finding and replacing data is possible using the Find/Replace Node!

Find and replace works inside sentences, words, numbers, and anywhere else in the data.

Similar to “find all” and “replace all” in your typical word-processing software, ET1 offers the same capability, but in a repeatable and consistent data app.

Using Find/Replace Node

Finding the data you want to edit and replacing it by hand can be a lot of work; with ET1’s Find/Replace Node, it’s easy to repeat.

  • Send data downstream to ET1’s Find/Replace Node
  • Select a column
  • Type what to find
  • Type what to replace
  • Determine whether case sensitivity matters
  • Decide if you’re going to use regex (advanced, but worth the effort)
  • Choose whether the entire cell must match to be considered replaceable (see the sketch below)
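Those options map onto ordinary string and regex operations. A minimal sketch, assuming rows are plain objects keyed by column header (the function and option names are illustrative, not ET1’s actual internals):

```javascript
// Minimal sketch of the Find/Replace options listed above.
function findReplace(rows, {
  column,
  find,
  replace,
  caseSensitive = false,
  useRegex = false,
  matchEntireCell = false,
}) {
  const flags = caseSensitive ? 'g' : 'gi';
  // Escape regex metacharacters unless the user opted into raw regex.
  const escaped = useRegex
    ? find
    : find.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  // Anchor the pattern when the entire cell must match.
  const source = matchEntireCell ? `^(?:${escaped})$` : escaped;
  const pattern = new RegExp(source, flags);
  return rows.map((row) => ({
    ...row,
    [column]: String(row[column]).replace(pattern, replace),
  }));
}
```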

Quick example of Find/Replace Node

In this example we have city/state data from our CSV Input Node; this data can be found in our GitHub CSV Node overview.

This data contains City and short State columns.

Perhaps we want to swap the MA short state with “Meow” to amuse our boss for 1 second.
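Using the sketch above (sample rows made up for illustration):

```javascript
const rows = [
  { City: 'Boston', State: 'MA' },
  { City: 'Dallas', State: 'TX' },
];
console.log(findReplace(rows, {
  column: 'State',
  find: 'MA',
  replace: 'Meow',
  matchEntireCell: true, // only swap cells that are exactly "MA"
}));
// [ { City: 'Boston', State: 'Meow' }, { City: 'Dallas', State: 'TX' } ]
```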

Thanks for learning more about ET1’s Find/Replace Node

We appreciate you using ET1’s Find/Replace Node, and if you have any questions, please contact us.

We would love to hear about your solutions.

Return to ET1 Overview to learn more.

ET1 Manual Table Node

Create a table manually using the Manual Table Node. The Manual Table Node falls under the data input node category.

It’s built to help you create small tables that you need in your data pipelines.

When you need a thin layer of data, this is a great tool for synthesizing it manually, a need that comes up regularly while creating ETL processes.

We like to think of the Manual Table Node as a building node: for storing important variables, or simply creating data from scratch without requiring a file or an established data pipeline.

Using the Manual Table Node

Using the Manual Table Node is straightforward in ET1.

  1. type in headers for column 1 and/or column 2
  2. begin creating the first row of data
  3. add more rows or delete rows
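The result is a small inline table that flows downstream like any other input. A rough sketch of the shape it produces (headers and values made up for illustration):

```javascript
// Headers and rows as entered in the Manual Table Node's settings.
const headers = ['Region', 'Quota'];
const rows = [
  ['East', 100],
  ['West', 150],
];

// Converted to the row-object shape used elsewhere in the pipeline.
const table = rows.map((r) =>
  Object.fromEntries(headers.map((h, i) => [h, r[i]]))
);
console.log(table);
// [ { Region: 'East', Quota: 100 }, { Region: 'West', Quota: 150 } ]
```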

Thanks for learning more about ET1’s Manual Table Node

We appreciate you using ET1’s Manual Table Node, and if you have any questions, please contact us.

We would love to hear about your solutions.

Return to ET1 Overview to learn more.

ET1 Unique Filter Node

The Unique Filter Node, or Unique Tool, finds the unique rows in your data pipelines, or lets people quickly review only the duplicates.

Plus, you can select which column(s) to find unique values within. This enables people to easily understand what is inside a column.

Duplicate rows happen; the Unique Filter Node manages these rows for you automatically.

Whether you’re eager to look only at unique rows or to drill into the duplicates, ET1’s Unique Filter Node is the data engineering tool for your unique or duplicated needs.

Unique Filter Node: Filter mode explained

The Unique Tool, or Unique Filter Node, has two filter modes:

  • Show unique only – this setting streams only the unique rows through the pipeline
    • You may want to run this across all pipelines as a way to verify your data
    • This is an easy way to create lookup tables
    • Build a tool to understand what is inside of a column
  • Show duplicate only – streams only the duplicates and removes the unique rows found
    • Drill into duplicates only, great for deep dives and researchers
    • Helpful for auditing pipelines: does your pipeline have duplicates?

Using the Unique Filter Node in ET1

Drag and drop your data pipeline arrow connection to the input of the Unique Filter to begin immediately reporting on unique rows only.

Open the settings for more granular options.

ET1’s Unique Filter Node automatically removes duplicate rows based on the selected columns; by default we infer you want to use all columns and start there. Opening the settings for more options offers a cool way to group data, as the example below shows.
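The columns you select change what counts as a duplicate (sample rows made up for illustration):

```javascript
const rows = [
  { City: 'Boston', State: 'MA' },
  { City: 'Salem',  State: 'MA' },
  { City: 'Dallas', State: 'TX' },
];
// All columns selected: every row differs, so "Show unique only" keeps all three.
// Only State selected: Salem/MA repeats the MA key, so "Show unique only"
// keeps Boston and Dallas, while "Show duplicate only" keeps Salem.
```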

Creating lookup tables with the Unique Filter Node

Auditing your column? How about the values inside of each column? This is a great tool for understanding what is possible in your data pipeline.

The Unique Tool helps you build a comprehensive understanding of what each column contains. A common strategy is to remove unnecessary columns and use the Unique Filter Node to extract the distinct values from the remaining table, surfacing valuable insights.

ET1 is designed to make data filtering and transformation straightforward. It helps to think of data analysis as a conversation with the dataset.

Technical specs on the Unique Tool’s Data Processing

Under the hood, the Unique Tool is JavaScript that filters data rows for uniqueness or duplication based on the specified columns.

It processes tabular data in a browser-based ETL pipeline, determining which rows are unique or duplicate by constructing composite keys from selected column values. The behavior depends on the filterMode configuration: when set to 'unique', it retains only the first occurrence of each key; when set to 'duplicates', it excludes first occurrences and keeps only subsequent repeats.

  • Composite keys use a rare delimiter ('␟'): The character U+241F (Symbol for Unit Separator) is used to join column values into a single key string. This prevents collisions that could occur with common delimiters like commas or pipes, especially when column values themselves contain such characters.
  • Robust handling of missing or invalid configurations: If node.columns is not an array or contains invalid column names, the function defaults to using all available headers, ensuring that filtering still occurs meaningfully instead of failing silently or throwing errors.
  • Two-pass algorithm ensures correctness: The first pass counts all key occurrences, which could be used for analytics (though currently unused); the second pass performs the actual filtering. This structure allows future enhancements, such as filtering by occurrence count thresholds.
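Putting those three points together, here is a minimal sketch of the two-pass algorithm described above (a reconstruction from this spec, not ET1’s verbatim source; any filterMode other than 'duplicates' is treated as 'unique' here):

```javascript
const SEP = '\u241F'; // U+241F, Symbol for Unit Separator

function uniqueFilter(rows, headers, node) {
  // Fall back to all headers when node.columns is missing or invalid.
  const valid = Array.isArray(node.columns)
    ? node.columns.filter((c) => headers.includes(c))
    : [];
  const columns = valid.length ? valid : headers;

  // Composite key: selected column values joined by the rare delimiter.
  const keyOf = (row) => columns.map((c) => String(row[c] ?? '')).join(SEP);

  // Pass 1: count occurrences of each key (currently unused, but kept
  // for future features such as occurrence-count thresholds).
  const counts = new Map();
  for (const row of rows) {
    const k = keyOf(row);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }

  // Pass 2: 'unique' keeps first occurrences; 'duplicates' excludes
  // first occurrences and keeps only subsequent repeats.
  const seen = new Set();
  return rows.filter((row) => {
    const k = keyOf(row);
    const isFirst = !seen.has(k);
    seen.add(k);
    return node.filterMode === 'duplicates' ? !isFirst : isFirst;
  });
}
```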

Return to ET1 Overview to learn more.