We are excited to announce that the GitHub OAuth Login page is now active in ET1.1 (Neon).
Picture a world where your most sensitive data files and database tables are protected with the same robust security protocols and layers you’d expect from a leading Fortune 500 corporation, all without having to set up data security yourself: login, authorization, storage, persistence, managing credentials, and keeping data compliance people happy.
In ET1.1, we encourage end users to begin moving important data into solutions that can become a single source of truth for their team, their organization, or partners across the sea.
Authorization isn’t just a checkbox on our to-do list
It’s our top priority, and that’s why we went with GitHub’s OAuth!
Our commitment in developing solutions is to cultivate and maintain an exceptionally secure environment, safeguarding both our users and the valuable data they entrust to us.
With this completed, we are actively seeking people to participate in beta testing.
ET1 uses OAuth 2.0 through GitHub, so end users sign in with their GitHub accounts and benefit from GitHub’s account protections, including two-factor authentication if they have it enabled.
FYI: Using dozens of SaaS products during analytics consulting engagements is our skill; building open source solutions from scratch is our specialty, and…
At DEV3LOPCOM, LLC, we have a decade+ of experience beta-testing and deploying numerous authentication tools in advanced production settings (enterprise and government). Building this solution gave me a unique perspective on streamlined login/security and helped optimize ET1’s login process.
P.S. If you need assistance with OAuth 2.0 in your software, let me know.
– Tyler Garrett, Founder, tyler@dev3lop.com
About Authorization
Software authorization is the process of determining what an authenticated user is permitted to do within a software system, such as accessing specific files, performing actions, or using certain features. It is crucial for ensuring security by preventing unauthorized access to sensitive data and functionality, and for maintaining operational control by managing user permissions, tracking usage, and enabling personalized user experiences that enhance customer satisfaction and loyalty.
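To make the distinction concrete, here is a minimal, hypothetical sketch of an authorization check in TypeScript. The role names, permissions, and canAccess() helper are illustrative only and are not ET1’s actual permission model.

```typescript
// Hypothetical permission model: not ET1's actual roles or permissions.
type Role = "viewer" | "editor" | "admin";
type Permission = "read:workflow" | "edit:workflow" | "delete:workflow";

const rolePermissions: Record<Role, Permission[]> = {
  viewer: ["read:workflow"],
  editor: ["read:workflow", "edit:workflow"],
  admin: ["read:workflow", "edit:workflow", "delete:workflow"],
};

// Authentication already proved who the user is; authorization asks
// whether that user is allowed to perform a specific action.
function canAccess(role: Role, needed: Permission): boolean {
  return rolePermissions[role].includes(needed);
}

console.log(canAccess("viewer", "edit:workflow")); // false
console.log(canAccess("editor", "edit:workflow")); // true
```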
About GitHub’s OAuth (OAuth 2.0)
We did not create OAuth 2.0; rather, we are utilizing it! Think of OAuth 2.0 as a way to keep your sensitive information safe and secure: we don’t store your password.
This helps you log in fast and seamlessly. If you’re already logged into GitHub in another tab, it’s a one-click login, and the process keeps your solution and data safe.
What is OAuth 2.0?
OAuth 2.0 is a protective layer we use to help us generate per-user data security.
OAuth 2.0 is an industry standard for managing data privacy and security.
People are going to want to add their data to ET1, different kinds of data, and we are working hard to ensure this is not only possible but also the safest it can be. We will not reinvent the wheel; instead we utilize vetted systems. This enables us to offer HIPAA-compliant storage, SOC 2-compliant storage, and much more, without overcomplicating the UX.
With ET1, we are going in a similar direction as our cron job software, Canopy’s Task Scheduler, while using different under-the-hood tech. We’re excited to share that dev3lop’s founder, Tyler Garrett, created the entire authorization flow. Login to ET1 with GitHub authentication is now live for beta testing!
Setting up authentication via GitHub is easier than you think, so here’s a link to GitHub’s Auth Documentation; more importantly, it unlocks Google emails. Plus, we like the idea of advocating for software we love to use.
If you don’t have a GitHub account, please create one
If you have a GitHub account, log in
We only request your email; we don’t save your password.
GitHub Auth and User Data Explained
Here’s a simple, privacy‑first explainer of the users table, plus a short technical brief of “GitHub Auth” for the curious.
Users table (what each column means). Source: database-schema.sql → table users
id (UUID)
A random unique tag the system gives you when you first show up.
It’s not your email or username.
Think of it like a sticker on a package so it can be tracked without opening it. 🔒
external_id (text)
A number/string that represents you from GitHub.
It’s how the system knows “the same person” across sessions without storing a password here.
Marked unique, so there’s only one of you. ✅
email (text, optional)
Used for “who’s signed in” display or contact. ✉️
created_at (timestamp)
When your record was first created.
Like a “birthday” for your account in this system. 🗓️
updated_at (timestamp)
When anything about your record last changed.
Auto‑updated so there’s an honest trail. 🕒
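For readers who prefer code, here is a small TypeScript sketch of the record shape described above. The field names mirror the column descriptions; the exact types and constraints live in database-schema.sql and may differ, and the example values are made up.

```typescript
// Sketch of the users record described above; exact SQL types live in database-schema.sql.
interface UserRecord {
  id: string;          // UUID assigned on first sign-in; not your email or username
  external_id: string; // stable GitHub identifier, unique per user
  email?: string;      // optional, used only for display or contact
  created_at: Date;    // when the record was first created
  updated_at: Date;    // auto-updated whenever the record changes
}

// Example value (all data here is made up).
const example: UserRecord = {
  id: "550e8400-e29b-41d4-a716-446655440000",
  external_id: "1234567",
  email: "person@example.com",
  created_at: new Date(),
  updated_at: new Date(),
};
console.log(example.external_id); // "1234567"
```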
Why this is privacy‑friendly
Minimal data: just an internal id, a GitHub id, and optionally an email.
No passwords stored here. Authentication happens through GitHub, so this table isn’t a vault of secrets.
Clear timestamps help you audit “when did this get created/updated?”
Short technical brief: GitHub Direct (for the nerds)
Flow: Browser requests GitHub OAuth → user consents at GitHub → GitHub redirects back with a code → the backend exchanges code for tokens → identity is read (e.g., user id, email) → app creates/updates users using external_id as the stable key.
Why do we use “direct” conversations with GitHub? The app talks straight to GitHub’s OAuth endpoints, avoiding extra middleman SaaS layers that can fail, require passwords, or become a single point of failure.
Data model: external_id is the anchor. Email is optional data that may change; external_id won’t.
Security posture: No passwords stored locally. Use HTTPS, strict redirect URIs, and least scopes (often user:email). Tokens should be short‑lived and stored server‑side with care.
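Here is a minimal TypeScript sketch of the server-side half of the flow described above, using GitHub’s public OAuth endpoints. The upsertUser() helper and environment variable names are illustrative, error handling is trimmed, and this is not ET1’s exact implementation.

```typescript
// Hypothetical persistence helper; a real app would upsert into the users table
// keyed on external_id.
async function upsertUser(u: { external_id: string; email?: string }) {
  console.log("upsert user", u);
}

// Server-side callback handler: exchanges the one-time code for a token,
// reads identity from GitHub, and creates/updates the local user record.
async function handleGitHubCallback(code: string) {
  // 1. Exchange the code for an access token (never done in the browser).
  const tokenRes = await fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify({
      client_id: process.env.GITHUB_CLIENT_ID,
      client_secret: process.env.GITHUB_CLIENT_SECRET,
      code,
    }),
  });
  const { access_token } = await tokenRes.json();

  // 2. Read identity: the numeric GitHub id becomes external_id; email is optional.
  const userRes = await fetch("https://api.github.com/user", {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  const ghUser = await userRes.json();

  // 3. No password is ever stored; external_id is the stable key.
  await upsertUser({ external_id: String(ghUser.id), email: ghUser.email ?? undefined });
}
```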
To keep your account secure, you must authenticate before you can access certain resources. When you authenticate, you supply or confirm credentials that are unique to you to prove that you are exactly who you declare to be.
TL;DR tech stuff
You get a private internal id, a GitHub id so the system recognizes you, and an optional email.
No password is stored here; that would not be safe!
GitHub Auth = clean OAuth 2.0 with GitHub as the identity source.
Plus, GitHub offers a simple Google login option, which is very secure and our preference.
Our history with authorization
Our previously released software, Canopy’s Task Scheduler, a desktop task scheduler for both Windows and Mac, was a good demonstration of our experience with desktop development, creating executable files, and embedding authorization into an Electron desktop-only application. Building native web applications on the desktop, with Auth0’s authorization and a local SQLite database, is not an easy task; however, with Node.js and Electron we think life is much easier.
Learn more about ET1, and contact us with any questions, for a demo, or for access to the beta.
The Neon Input Node is our first managed database access node, and an intuitive approach to giving people access to serverless PostgreSQL, which users can manage in Neon Lake.
Data here is safe, protected behind the GitHub OAuth login described above.
As you use CSV files, JSON, and manual tables in ET1.1, that data automatically flows to the Neon Lake, making it available immediately in the Neon Input Node.
Think of the Neon Input Node as a way to collaborate more easily across data development, and a way to begin bringing together flat files lost around the organization, all in one node.
The Neon Input lets you access the Neon Lake from any Workflow.
The Neon Input Node enables teams to work together on the same source of truth and helps you work across workflows with the same data source.
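For the curious, here is a rough sketch of what reading from Neon’s serverless PostgreSQL looks like with Neon’s serverless driver. ET1’s Neon Input Node handles this for you; the table name and DATABASE_URL environment variable below are assumptions, not ET1 internals.

```typescript
// Sketch only: ET1's Neon Input Node does this work for you. The table name and
// DATABASE_URL environment variable are illustrative assumptions.
import { neon } from "@neondatabase/serverless";

const sql = neon(process.env.DATABASE_URL!);

async function readFromNeonLake() {
  // Data that flowed in from CSV files, JSON, or manual tables is queryable immediately.
  const rows = await sql`SELECT * FROM uploaded_tables LIMIT 10`;
  console.log(rows);
}

readFromNeonLake();
```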
ET1 Neon Input Node requirements
ET1 Neon Input Node is not available in the Bronze Edition of ET1.
Bring your columns together as one with the Concat Node in ET1.
This node is similar to CONCAT() in Excel and allows you to easily bring more than one column together in your data pipeline, and it also lets you choose the delimiter. The opposite of the Concat Node is the Split Node.
How to use Concat Node in ET1
Simply send data to the Concat Node and start setting up your node with these settings (a short sketch of the behavior follows the list):
choose columns
choose separator
output column name
keep original columns (yes or no)
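As a rough illustration of the four settings above, here is a hypothetical TypeScript sketch of what the Concat Node does to each row. ET1 configures this visually, so the code and names here are illustrative only.

```typescript
// Hypothetical sketch of the Concat Node's per-row behavior; ET1 configures
// this visually, so these names and types are illustrative only.
type Row = Record<string, string>;

interface ConcatSettings {
  columns: string[];      // which columns to join
  separator: string;      // delimiter placed between values
  outputColumn: string;   // name of the new column
  keepOriginals: boolean; // keep or drop the source columns
}

function concatRows(rows: Row[], s: ConcatSettings): Row[] {
  return rows.map((row) => {
    const joined = s.columns.map((c) => row[c] ?? "").join(s.separator);
    const base = s.keepOriginals
      ? { ...row }
      : Object.fromEntries(Object.entries(row).filter(([k]) => !s.columns.includes(k)));
    return { ...base, [s.outputColumn]: joined };
  });
}

// Example: join Location and Supplier with a dash and drop the originals.
console.log(
  concatRows([{ Location: "Austin", Supplier: "Acme" }], {
    columns: ["Location", "Supplier"],
    separator: " - ",
    outputColumn: "concatenated", // the node's default output name
    keepOriginals: false,
  })
);
// => [ { concatenated: "Austin - Acme" } ]
```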
In this example, we have Location and Supplier columns that need to be put together and then removed from our data pipeline. By default, the Concat Node names the new column “concatenated,” which might help early adopters remember what happened in this column. However, our boss asked us to change headers and concatenate data in their CSV data.
We use the CSV Input Node. Although the Column Renamer Node may help, we wanted to consolidate your effort here, in case naming the output column is all the renaming you need.
The Concat Node has four different settings that help end users clean up their concatenation efforts. You may or may not want to keep the original columns, and this is an important element to consider.
Thanks for learning more about ET1’s Concat Node
We appreciate you using ET1’s Concat Node, and if you have any questions, please contact us.
Create a table manually using the Manual Table Node. The Manual Table Node falls under the data input node category.
It’s built to help you create small tables that you need in your data pipelines.
When you need a thin layer of data, this is a great tool for manually synthesizing it, which happens regularly while creating ETL processes.
We like to think of the Manual Table Node as a building node: for storing important variables, or simply creating data from scratch without requiring an established file or data pipeline.
Using the Manual Table Node
Using the Manual Table Node is straightforward in ET1 (a small sketch of the resulting table follows these steps):
type in headers for column 1 and/or column 2
begin creating the first row of data
add more rows or delete rows
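Here is a small illustrative sketch of the kind of table these steps produce: the headers you typed plus the rows you added. The column names and values below are made up.

```typescript
// Illustrative only: the kind of small table the Manual Table Node produces.
const manualTable = {
  headers: ["Region", "Target"],
  rows: [
    ["North", "1200"],
    ["South", "950"],
  ],
};

// Downstream nodes can treat it like any other input: one object per row.
const asObjects = manualTable.rows.map((r) =>
  Object.fromEntries(manualTable.headers.map((h, i) => [h, r[i]]))
);
console.log(asObjects); // [ { Region: "North", Target: "1200" }, { Region: "South", Target: "950" } ]
```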
Thanks for learning more about ET1’s Manual Table Node
We appreciate you using ET1’s Manual Table Node, and if you have any questions, please contact us.