I have a problem, and they say the first step to fixing any problem is to admit you have one, so here goes:
I'm addicted to Making Flows
So how did I get to this realisation? Well, it was one Saturday night when I was making a flow to clone a Solution. Why is that so bad, my fellow Power Automate Developers may ask? Well...
- I'm probably only going to use this flow once and it would be quicker just to copy each component
- It's Saturday NIGHT!
But guess what, I got it working and it's so cool, let me show you 😎
For anyone who just wants to clone a solution, the flow is here (named Platform Tools) for you to download. The only things to know are that it only works with Flows, Connection References and Environment Variables, it doesn't copy the variable value, and you need to own the connections behind the connection references.
So here is the problem: I have support flows running after my pipelines have run, but because I'm using different SPNs for different environment stacks I can't reuse the same flows across different environments (if only there was a way to dynamically flip connections, Microsoft feel free to go down that rabbit hole). I had 2 choices: move each solution to its own environment, which caused more issues, or clone the entire solution.
Now here's where problem 2 strikes: the thought of copying 20+ flows, connection references and Environment Variables, then editing every flow to change said references and variables, did not sound like fun. It sounded long and boring, and I was bound to miss one somewhere and cause lots of debugging pain. And here comes problem 1: I like making flows, so I could spend time doing something I hate, or gamble on finding a way to automate it and build a flow. This so goes against KISS (Keep It Simple, Stupid), but hey, maybe I'm a little of both already.
The flow is going to work like this:
And this made me realise we have 4 main areas:
- Create Solutions / Components
- List Solution Contents
- Add New Components to New Solution
- Update Flow Contents
And the last one was the one that got me, as I didn't realise how important it was.
1. Create Solutions / Components
At the basic level, solutions and components (flows, connection references, environment variables) are just rows/records in Dataverse tables. So all we need to do to create them is add a row, and luckily there is an action for that: "Add a new row to selected environment".
To make mine fully dynamic I have set the environment as an input, which means there is no schema for the inputs, so you have to build your own JSON. If you want to make it easier to edit, just select your environment to see all the inputs.
For the solution body I created a JSON object like below:
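Mine looks roughly like this (just a sketch; I'm assuming the standard solution table columns and binding the publisher with @odata.bind, so adjust it to your own inputs):

{
"uniquename": "{prefix + original uniquename}",
"friendlyname": "{prefix + original friendlyname}",
"version": "@{outputs('List_rows_from_selected_environment_solution')?['body/value'][0]?['version']}",
"publisherid@odata.bind": "/publishers(@{outputs('List_rows_from_selected_environment_solution')?['body/value'][0]?['_publisherid_value']})"
}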
I used the original solution's name and added a prefix to differentiate it (the prefix is an input of the flow). I also used the same publisher, which means I run this after I have found the solution to copy:
Then I repeat the pattern for each component, in my case just 3 types: Flow (process/workflow), Connection Reference and Environment Variable Definition. Note I don't copy the Environment Variable Value, as that is most likely the value you will change, and it adds complexity in ordering the creation, so I skipped it 😎.
Environment Variable Definition
FYI this is what the add row looks like with the schema shown by selecting an environment
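If you can't see the schema, the body is just a copy of the matching columns from the original row with a new name, roughly like this (a sketch; I'm reusing the 'Get a row' action name that appears later, and how you build the new schemaname is up to you):

{
"schemaname": "{new unique schema name, e.g. prefix + original schemaname}",
"displayname": "{prefix + original display name}",
"type": @{outputs('Get_a_row_by_ID_from_selected_environment_Environment_Def')?['body/type']},
"defaultvalue": "@{outputs('Get_a_row_by_ID_from_selected_environment_Environment_Def')?['body/defaultvalue']}"
}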
Flow
There are lots of parameters/inputs for flows; I have set just a few that seem to work for me. Your mileage may vary, but adding more is simple enough.
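For reference, the handful I set look roughly like this (a sketch; as far as I know category 5 means 'Modern Flow' and type 1 means 'Definition', and clientdata is the updated definition built in step 4):

{
"name": "{prefix + original flow name}",
"category": 5,
"type": 1,
"primaryentity": "none",
"clientdata": "{the updated flow definition from step 4}"
}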
2. List Solution Contents
So we can create solutions and their components, but how do we know what to make? This is where the solution components table comes in.
Because components can be in multiple solutions, the relationship cannot be direct. So each component is added to the solution component table with a relationship to the solution; that way a component can appear in the table multiple times.
This means all we need to do is query the component table with our solution id.
The only complexity is that we start with our solution name but we need the solution id. This means we need to query the solution table first and pass the first value into our component list.
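The solution lookup is just another List rows with a filter on the name passed into the flow, something like this (a sketch, assuming the trigger input is the solution's unique name, here called SolutionName):

uniquename eq '@{triggerBody()?['text_SolutionName']}'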
Component query
_solutionid_value eq '@{outputs('List_rows_from_selected_environment_solution')?['body/value'][0]?['solutionid']}'
3. Add New Components to New Solution
In theory all we need to do to add the new components to the new solution is add a row to the solution components table. And this is where theory sucks, as I kept getting errors.
And I realised this is because of how the relationship is formed, and that made me think: how does the Power Platform do it? After a little looking I found there was an API for that 😎.
Dataverse APIs can be found in the unbound action ('Perform an unbound action'), and there in the list was the AddSolutionComponent action.
With a non-dynamic environment you get the schema, but it's simple enough even without it:
{
"ComponentId": "{id from row created}",
"ComponentType": {type, same as used in switch},
"SolutionUniqueName": "{uniquename from new solution}",
"AddRequiredComponents": false,
"DoNotIncludeSubcomponents": false
}
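For example, for one of the cloned flows the body ends up looking something like this (the add-row action names here are illustrative, and 29 is, as far as I know, the component type for a workflow, which is the table flows live in):

{
"ComponentId": "@{outputs('Add_a_new_row_to_selected_environment_Flow')?['body/workflowid']}",
"ComponentType": 29,
"SolutionUniqueName": "@{outputs('Add_a_new_row_to_selected_environment_Solution')?['body/uniquename']}",
"AddRequiredComponents": false,
"DoNotIncludeSubcomponents": false
}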
So we just add this action after every create, ending up with this:
4. Update Flow Contents
The eagle-eyed may have spotted the Append to array aEnvironmentVariables above, and that is because we need to update the flow.
This was frustrating to get my head around, but pretty simple afterwards. In a nutshell, we don't want our cloned flows to still be using the original Connection References and Environment Variables.
The solution was twofold: first we have to process the Connection References and Environment Variables, then the flows after. The references and variables are stored in arrays so that we can loop over them.
They both follow a similar structure:
Environment Variables
{
"old": {
"displayname": "@{outputs('Get_a_row_by_ID_from_selected_environment_Environment_Def')?['body/displayname']}",
"schemaname": "@{outputs('Get_a_row_by_ID_from_selected_environment_Environment_Def')?['body/schemaname']}"
},
"new": {
"displayname": "@{outputs('Add_a_new_row_to_selected_environment_Environment_Def')?['body/displayname']}",
"schemaname": "@{outputs('Add_a_new_row_to_selected_environment_Environment_Def')?['body/schemaname']}"
}
}
Connection References
{
"old": {
"connectionreferencelogicalname": "@{outputs('Get_a_row_by_ID_from_selected_environment_Connection')?['body/connectionreferencelogicalname']}",
"connectionid": "@{outputs('Get_a_row_by_ID_from_selected_environment_Connection')?['body/connectionid']}"
},
"new": {
"connectionreferencelogicalname": "@{outputs('Add_a_new_row_to_selected_environment_Connection')?['body/connectionreferencelogicalname']}",
"connectionid": "@{outputs('Add_a_new_row_to_selected_environment_Connection')?['body/connectionid']}"
}
}
Once we have our arrays the plan is simple: we loop over the flow definition and replace the old values with the new ones.
The flow definition in the workflow table is actually called 'clientdata' (no idea why). Within the definition we are looking for the Connection References and Environment Variables.
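In practice I read the flow row first and drop its clientdata into a string variable, so the loops below can keep updating it. Something like this in a Set variable action (the 'Get a row' action name is just for illustration):

Set variable sFlowDefinition to:
@{outputs('Get_a_row_by_ID_from_selected_environment_Flow')?['body/clientdata']}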
The variables are easy, as they always appear as:
@parameters('{displayname} ({schemaname})')
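For example, a variable with the display name 'API URL' and schema name 'abc_APIURL' (made-up names) would show up in the definition as:

@parameters('API URL (abc_APIURL)')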
Both can be found from the add row, and we use the below replace expression to swap them:
replace(
  replace(
    variables('sFlowDefinition'),
    items('Apply_to_each_Environment_Var')?['old/displayname'],
    items('Apply_to_each_Environment_Var')?['new/displayname']
  ),
  items('Apply_to_each_Environment_Var')?['old/schemaname'],
  items('Apply_to_each_Environment_Var')?['new/schemaname']
)
Connection References are similar but a little more complex to figure out. They live in 2 places: in the connectionReferences object and in each action that uses the connection.
You can see the object above: we have the 'connectionreferencelogicalname', which is also found in the add row. As you can see, this connection reference is then linked to a key in the definition, in this case shared_sharepointonline_2 & shared_office365_1, which are then both referenced throughout the flow.
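To make that a bit more concrete, the connectionReferences block inside clientdata looks roughly like this (trimmed down and from memory, with made-up logical names, so treat it as a sketch):

"connectionReferences": {
  "shared_sharepointonline_2": {
    "api": { "name": "shared_sharepointonline" },
    "connection": { "connectionReferenceLogicalName": "abc_SharePointRef" }
  },
  "shared_office365_1": {
    "api": { "name": "shared_office365" },
    "connection": { "connectionReferenceLogicalName": "abc_OutlookRef" }
  }
}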
So that means we don't need to update all of the actions, win 😎 As a curveball, I spotted that in some old flow schemas the connection also had the connection id, so just to be safe I replaced that too, as it's in the add row as well.
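The swap itself is the same nested replace trick as the variables, just run against the connection array, roughly like this (a sketch; 'Apply_to_each_Connection_Ref' is whatever you named your loop over the connection reference array):

replace(
  replace(
    variables('sFlowDefinition'),
    items('Apply_to_each_Connection_Ref')?['old/connectionreferencelogicalname'],
    items('Apply_to_each_Connection_Ref')?['new/connectionreferencelogicalname']
  ),
  items('Apply_to_each_Connection_Ref')?['old/connectionid'],
  items('Apply_to_each_Connection_Ref')?['new/connectionid']
)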
And that's it, there are a few filters and switches but the flow is done (there are also 2 connection reference branches, as some older flows use type 10039 instead of 10078). It's great that at the heart of the Power Platform everything is Dataverse tables and APIs, all of which can be used from within a flow.
Sometimes building a flow is overkill, but it will always be fun 😎