Teaser (for the impatient)
Do you have a repository that relies on CSV files... and want to:
- Protect your data with quality and integrity checks, before bad data corrupts it
- Check data quality as part of your project lifecycle
- Get operational KPI reporting
- Automate the release process to show your contributors what has been achieved
- Deliver data
- Endless other use cases
Look no further: this short post covers all these aspects with a practical, easy-to-follow workflow.
Demo
Enough talk, let's jump in:
opt-nc/setup-duckdb-action
Blazing fast and highly customizable GitHub Action to set up a DuckDB runtime.
Setup Duckdb Action
This action installs DuckDB with the version provided as input.
Inputs
version
Not required. The version you want to install. If no version is provided, the latest version will be installed.
Example usage
To pin a specific DuckDB version:

uses: opt-nc/setup-duckdb-action@v1.0.8
with:
  version: v1.0.0

Or, to install the latest version:

uses: opt-nc/setup-duckdb-action@v1.0.8
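To tie this back to the quality checks mentioned in the teaser, here is a minimal workflow sketch. The file `data/products.csv`, the `id` column and the job name are hypothetical placeholders, so adapt the query to your own dataset:

```yaml
# Hypothetical workflow: block a merge when the CSV contains duplicate ids.
name: data-quality
on: [push, pull_request]

jobs:
  check-csv:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: opt-nc/setup-duckdb-action@v1.0.8
        with:
          version: v1.0.0
      - name: Check for duplicate ids
        run: |
          # Count duplicate ids with DuckDB; fail the step if any are found
          DUPES=$(duckdb -csv -noheader -c "SELECT count(*) - count(DISTINCT id) FROM read_csv_auto('data/products.csv');")
          if [ "$DUPES" -ne 0 ]; then
            echo "Found $DUPES duplicated ids in data/products.csv"
            exit 1
          fi
```

Because the step exits with a non-zero status when the check fails, the pull request is blocked before bad data can be merged.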
Further
Once CI (Continuous Integration) is in place... you can also, without much effort, deliver that data to third-party services as part of your DevOps pipeline.
At this point I see two easy options:
- Upload to MinIO through its S3 API over HTTPS
- Use MotherDuck
... which makes your data available for new use cases, at no additional effort.
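As a sketch of the first option, assuming a MinIO endpoint at `minio.example.com:9000`, a bucket named `my-bucket` and credentials stored as repository secrets (all of these are placeholders), a delivery step could look like this:

```yaml
# Hypothetical delivery step: export the checked CSV as Parquet to a MinIO bucket over its S3 API
- name: Deliver data to MinIO
  env:
    S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
    S3_SECRET_ACCESS_KEY: ${{ secrets.S3_SECRET_ACCESS_KEY }}
  run: |
    duckdb -c "
      INSTALL httpfs; LOAD httpfs;
      SET s3_endpoint='minio.example.com:9000';       -- placeholder MinIO endpoint
      SET s3_url_style='path';                        -- MinIO expects path-style URLs
      SET s3_use_ssl=true;                            -- deliver over HTTPS
      SET s3_access_key_id='$S3_ACCESS_KEY_ID';
      SET s3_secret_access_key='$S3_SECRET_ACCESS_KEY';
      COPY (SELECT * FROM read_csv_auto('data/products.csv'))
        TO 's3://my-bucket/products.parquet' (FORMAT PARQUET);
    "
```

The MotherDuck route follows the same idea: the workflow runs DuckDB SQL against the hosted service instead of copying files, with the exact connection setup depending on your MotherDuck account.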