The more I use HashiCorp's Terraform, the more convinced I become that this is a half-baked product, with more resources dedicated to developing the business model than to the actual product.
Connecting to an external resource, like my GoDaddy DNS records, turned out to be impossible. If I try to update just one IP for my app, I lose all my other, unrelated DNS records. It's all or nothing for Terraform: either you manage all your records or you don't. Import doesn't work either, even though it's advertised; I tried two providers for this, the second one supposedly adding import capabilities. It didn't work.
I had to use a `null_resource` and an external shell script that issues HTTP requests against the GoDaddy API to update my IPs. And Terraform can't query the state of this resource, of course, so this is just half a solution.
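For the curious, the workaround looks roughly like this. It's a sketch, not my exact config: the `digitalocean_droplet.app` resource, the domain, the record name and the environment variables are placeholders, and the GoDaddy v1 records endpoint is my own assumption, not something a Terraform provider gives you:

```hcl
# Sketch: update a single GoDaddy A record via its HTTP API,
# leaving every other record in the zone untouched.
# "digitalocean_droplet.app", the domain and the record name are placeholders.
resource "null_resource" "godaddy_a_record" {
  # Re-run the provisioner only when the droplet's IP changes.
  triggers = {
    ip = digitalocean_droplet.app.ipv4_address
  }

  provisioner "local-exec" {
    command = <<EOT
curl -s -X PUT "https://api.godaddy.com/v1/domains/example.com/records/A/app" \
  -H "Authorization: sso-key $GODADDY_KEY:$GODADDY_SECRET" \
  -H "Content-Type: application/json" \
  -d '[{"data": "${digitalocean_droplet.app.ipv4_address}", "ttl": 600}]'
EOT
  }
}
```

The problem stays the same, though: Terraform only knows when to re-run the script, it never reads the record back, so any change made outside of it goes unnoticed.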
Today, I am trying to get it to track my provisioning (done with Ansible) using a `null_resource` and md5 checks of local files (using `data "local_file"`). If the state is clean, `terraform apply` might or might not work. After hanging for a while on some operation, it gets killed by the OS.
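The setup I'm describing is roughly the following sketch; the playbook path, the inventory and the droplet resource name are placeholders rather than my actual files:

```hcl
# Sketch of the md5-trigger approach: re-run Ansible only when the
# playbook content (or the droplet IP) changes.
data "local_file" "playbook" {
  filename = "${path.module}/playbook.yml"
}

resource "null_resource" "provision" {
  triggers = {
    playbook_md5 = md5(data.local_file.playbook.content)
    droplet_ip   = digitalocean_droplet.app.ipv4_address
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i '${digitalocean_droplet.app.ipv4_address},' playbook.yml"
  }
}
```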
If the first run works, the second time `terraform apply` is run it gets killed for sure. Everything comes to ashes and I sometimes can't even `terraform destroy` the damn thing again. The whole thing feels strange and random.
Anyway… this is just a rant, sorry. I might go back to my old way of only letting Terraform create the droplets and calling Ansible afterwards, but… what's the point? I could get Ansible to do the whole thing by itself.
The idea was to tie everything together so that I could just call Terraform and have my infra recreated on every run, but only when needed. Today, Terraform is failing me at this.