Collecting Usage Statistics of Dynamo / Revit Scripts Into a MySQL Database (With a Guide)
Regimantas Ramanauskas

In my previous article, I tried to convey some thoughts on the value of Dynamo scripts for consultancies, as well as on benchmarking those scripts against metrics. What I did not elaborate on as much was how the data for these metrics can be collected in the first place. That is what this article is about: part discussion, part tutorial in the second half.
First, a reminder of why we need to collect data. When trying to measure the impact that developed Dynamo scripts, or the workflow in general, have on the office and on other engineers and architects, we cannot rely on everyone to constantly keep track of their Dynamo usage, or on personal feedback alone. Requiring users to log their experience with Dynamo scripts every single time would become a bottleneck in the design process, which is exactly what we are trying to overcome by implementing computational BIM design through software like Dynamo and Grasshopper. Exact metrics are also necessary to overcome potential biases when collecting personal feedback. A user may prefer certain scripts that increase his own efficiency, but those Dynamo scripts may not be used much by others and may not represent the direction of the workflow being developed to maximize the overall company efficiency. In addition, reliably estimating the value generated by a certain Dynamo script will revolve around measuring the time saved compared to the manual way, and for that an exact count of the times the script was run is crucial. More on this subject, and on the way to benchmark the value, can be found in my previous LinkedIn article.
Returning to the topic, as I have slightly drifted away, the best solution to the above issues is automated usage collection. There are multiple ways data collection can be carried out and integrated into Dynamo scripts; I will focus on two approaches here:
Collecting data into text logs on individual machines;
Collecting data into MySQL databases.
I will explore both options and present ways to implement them, with mini-tutorials for Dynamo and Revit.

Recording Dynamo Usage Statistics Into Text Logs
This approach is the simplest to implement, but it does have its drawbacks, depending on the number of users or, more precisely, on the likelihood of two users running Dynamo scripts at the same instant in time. While it might not seem likely, in larger offices working over larger networks, latency and other delays such as disk writes can stretch the time window needed to open the log file, write the information and close it again, long enough to actually pose problems. Ultimately, gathering data into text logs can be a good option if:
The number of users will be relatively small to medium and the chance of concurrent access to a text log file stored somewhere centrally (e.g. on company network storage) is relatively small;
The number of users will be medium to large and storing statistics locally in text logs on the users' computers is not a problem or is even desired; alternatively, if a separate database is not an option;
A single user will be using Dynamo and Revit, so only personal Dynamo script statistics are to be collected locally.
However, if possible, for future potential and data usability, a dedicated database would be the
best option.
Implementing this in Dynamo is quite straightforward, as no custom nodes or libraries are required: only those available by default, plus some Python that can be adapted depending on what sort of data you want to collect. For the following instructions, I have used Dynamo 2 and Revit 2019. I will assume that the reader has some knowledge of Dynamo and will not go into too much detail.
One important decision that must be made before doing anything concerns the information to be collected. It might seem simple enough, but it needs to be thought through well in advance.
In the end, depending on how much you intend to future-proof your data, you will want some information identifying the script, the user, the project on which the Dynamo script was used, and when it was used. This information can tell you which scripts get used and how much; additionally, collecting the input variables with which the scripts were run could provide further benefits for troubleshooting or, if the data is good enough, potentially for applying some neural networks.
In this case, I have settled for the following data: Dynamo script id (given manually), script file name, script version (also given manually), user, user's computer name, project file name, and date and time.
The nodes are placed below the main script and have no connections to the main part of the script, though this might vary if we were also collecting input data.

In total, there are only 8 nodes, including 1 Python node. 1) The first step is to specify the location of the text log file. It is best to have it in a centralized location, perhaps on network storage, but it could also be stored individually for each user on their machine; either way, the file path must be provided carefully. This script does not create the actual text file, so an empty file should be created before specifying its path as a string in Dynamo. 2) I have included manually specified script identification and script version parameters, which are passed to 3) the Python script that collects the other information (the code is shown further down the article). 4) The retrieved data are converted to strings and then joined together into a single line in 6). To separate the values later on, a delimiter is specified in 5); it can be changed to whatever you want. 7) As the whole data set is now formatted into a single string, we need each new entry to start on its own line rather than running into the previous one, so we add a newline character, specified by \n, to the end of the string. Finally, at 8) we tell Dynamo to write that string to the file we specified at the start.
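For those who prefer fewer nodes, steps 4) to 8) could also be collapsed into the Python node itself. Below is a minimal sketch of what that might look like; the input wiring, the variable names and the ; delimiter are my own assumptions, not the exact setup from the graph above.

# Minimal sketch: joining the collected values and appending them to the log.
# log_path and values are hypothetical inputs wired in from the Dynamo graph.
log_path = IN[0]   # full path to the pre-created, empty log file
values = IN[1]     # list of collected values (id, version, user, etc.)

delimiter = ";"    # change to whatever separator you prefer
line = delimiter.join([str(v) for v in values]) + "\n"  # one entry per line

with open(log_path, "a") as log_file:  # append so earlier entries are kept
    log_file.write(line)

OUT = line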

The Python script does most of the information gathering. One reason to use it is to reduce the number of Dynamo nodes within the script; the other is that some data cannot be obtained with the default Dynamo nodes without relying on additional Dynamo packages. The code is presented below, with comments provided after the # symbol for those not familiar with the syntax. I am sure someone will ask for the code in text form so they can just copy it; consider typing it out to be your homework if you intend to implement this, hence I will not provide the option to copy-paste it.
One additional comment: the script version and script id that are connected to this Python node appear only in the output variable, as the data is simply passed through. You might ask why not include them directly in the code? That would mean editing the Python code for every single script; keeping some variables outside the code can be simpler. One more remark: if the collection will be very widespread, on dozens of scripts or more, it would be best to maintain the Python code inside custom nodes. That way, future changes can be somewhat centralized, as only the custom node would need to be changed and not each script individually.
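Since the original code is shown only as an image, here is a minimal sketch of what such a gathering node could look like in Dynamo 2 / Revit 2019. The variable names and the output order are my assumptions, not the author's exact code, so treat it as a starting point rather than a drop-in replacement.

# Minimal sketch of the information-gathering Python node.
import clr
import System
from datetime import datetime

clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager

script_id = IN[0]       # supplied manually in the graph, passed straight through
script_version = IN[1]  # supplied manually as well, passed straight through

doc = DocumentManager.Instance.CurrentDBDocument
project_name = doc.Title                        # active Revit project file name
user_name = System.Environment.UserName         # Windows user running the script
machine_name = System.Environment.MachineName   # user's computer name
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')

# The script file name could be wired in as another manual input if needed.
OUT = [script_id, script_version, user_name, machine_name, project_name, timestamp]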

Finally, the outcome after 2 quick test runs is the following text log (I have pixelated some semi-sensitive information). As you can see, the data is presented in a relatively clean fashion. The order of the data can be changed in line 42 of the Python script; it is really up to preference.

Recording Dynamo Usage Statistics Into a MySQL Database
When it comes to MySQL, or really any SQL database, the first benefit that comes to mind is centralized storage and simultaneous access to data. Beyond that, the data inside such a database can be processed far more easily later than many different text log files, or one large central text log file. The data can also be linked directly to business intelligence software like Power BI to provide live, easily understandable feedback.
The following example assumes that the user has some knowledge of MySQL, Dynamo and Python. In addition, there are some prerequisites to this approach. I used MySQL 8.0, Dynamo 2 and Revit 2019. I created an empty database for this guide and purposely did not set up any tables manually, so we can generate them with the Python script shown a little further down the article. Normally you would want to do this once, manually, as opposed to having redundant code inside a Dynamo script that gets executed every time. The remaining major component required is MySQL Connector/NET 8.0, to interface with the database from Dynamo/Python. The benefit of the .NET connector is that it contains everything needed inside a single MySql.Data.dll library, meaning that it can be copied to a remote central server/storage. That way, instead of having to install the connector on each user's machine, the library can be linked to from the Python script. After you are done, you are free to keep or remove the connector installation from your own machine.
If you click on the MySQL Connector/NET 8.0 link above, it will take you to the download page. From there you can get the setup file, install it on your machine, and afterwards just go to the installation folder in your Program Files directory and look for the file named MySql.Data.dll.
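Once you have copied MySql.Data.dll somewhere accessible, referencing it from an IronPython node takes only a couple of lines. A sketch, where the path is a placeholder for wherever you actually put the library:

import clr
# Point this at wherever MySql.Data.dll lives, e.g. a central network share.
clr.AddReferenceToFileAndPath(r'\\server\share\MySql.Data.dll')
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand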
Now, this method is not the only way of exporting data from Dynamo to a MySQL database; alternatives include custom nodes from available packages or even using different connectors for MySQL. But if your office is running MS Windows, which it likely is, I find this solution the easiest to implement centrally, as you have everything you need inside a single library. A reminder: Dynamo uses IronPython as opposed to standard Python, hence we are somewhat limited compared to the latest versions of Python.
Very few Dynamo nodes are actually needed for this, as we mostly rely on Python blocks. We use essentially the same data as with the text logs, and the same script as well, to illustrate that there are only 4 nodes in total this time. The last node, to the right, updates the database table with the status of the script, recording whether it completed successfully or not.
As you can see below, I have switched up the id and version slightly. These variables get passed to the same Python script as shown above, to keep things simple. The rest of the information then goes to a Python node which does the actual MySQL connection and queries. That script sends its output, the primary key number, to the last Python node, which updates the status of the script.

The inside of the Python script can look intimidating, but it really isn't; I have commented all the key lines of code. The principles are simple. You initially have to link the folder that contains the MySql.Data.dll library. Then it comes down to defining the MySQL queries, which I won't go into in detail, as I assume you have some knowledge of the statement syntax. The idea is to provide those queries as strings or multiline strings, hence the triple " symbols. We then open the connection to the database with the specified connection data and execute the queries against it. The last bit of code retrieves the primary key, a unique identifier of the data row inserted into the table; I have defined an additional condition to retrieve the primary key where the time is the same as the one we inserted.
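As with the text log script, the original code is an image, so below is a minimal sketch of the insert node under stated assumptions: the table name script_stats, the column layout and the connection details are placeholders of mine, not the author's schema. It also retrieves the primary key via Connector/NET's LastInsertedId property, a simpler alternative to the time-matching query the author describes.

import clr
clr.AddReferenceToFileAndPath(r'\\server\share\MySql.Data.dll')  # placeholder path
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand

# Collected data from the gathering node; the order matches the earlier sketch.
script_id, script_version, user_name, machine_name, project_name, timestamp = IN[0]

# Placeholder credentials - use your own server, user and database names.
conn_str = "server=localhost;user=dynamo;password=secret;database=dynamostats"

create_query = """CREATE TABLE IF NOT EXISTS script_stats (
    entry_id INT AUTO_INCREMENT PRIMARY KEY,
    script_id VARCHAR(50),
    script_version VARCHAR(20),
    user_name VARCHAR(50),
    machine_name VARCHAR(50),
    project_name VARCHAR(100),
    run_time DATETIME,
    completed TINYINT DEFAULT 0)"""

# For production use, parameterized queries (MySqlParameter) would be safer
# than string formatting; kept simple here for illustration.
insert_query = """INSERT INTO script_stats
    (script_id, script_version, user_name, machine_name, project_name, run_time)
    VALUES ('{0}', '{1}', '{2}', '{3}', '{4}', '{5}')""".format(
    script_id, script_version, user_name, machine_name, project_name, timestamp)

connection = MySqlConnection(conn_str)
connection.Open()
try:
    MySqlCommand(create_query, connection).ExecuteNonQuery()  # one-off table creation
    cmd = MySqlCommand(insert_query, connection)
    cmd.ExecuteNonQuery()                                     # insert the usage row
    primary_key = cmd.LastInsertedId                          # id of the new row
finally:
    connection.Close()

OUT = primary_key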

The last Python code block handles updating the table entry after the script completes or fails to complete. It takes the final value from the final Dynamo node of the functional part of the script and checks its output. In most failure cases it will output "null". In other cases, when, say, some node is not properly connected and can't function, it will show "Function" when previewed in Dynamo, but the value is actually expressed as "ProtoCore.DSASM.StackValue". Therefore I use a simple conditional check: if the output value is "null" or "ProtoCore.DSASM.StackValue", the script does not execute the MySQL part and only outputs a simple status of the node itself. If the output is anything but those values, it executes the MySQL query that updates the table.
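A minimal sketch of that status-update node, under the same assumptions as above (the hypothetical script_stats table and placeholder connection string); note that a "null" value arrives inside a Python node as None:

import clr
clr.AddReferenceToFileAndPath(r'\\server\share\MySql.Data.dll')  # placeholder path
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand

primary_key = IN[0]   # row id returned by the insert node
final_output = IN[1]  # output of the last functional node of the main script

# A failed run shows up as null (None in Python) or as ProtoCore's internal
# stack value when a node is left unconnected; only mark the run as completed
# when neither is the case.
failed = final_output is None or "ProtoCore.DSASM.StackValue" in str(final_output)

if failed:
    OUT = "Script failed - status left at 0"
else:
    conn_str = "server=localhost;user=dynamo;password=secret;database=dynamostats"
    update_query = "UPDATE script_stats SET completed = 1 WHERE entry_id = {0}".format(primary_key)
    connection = MySqlConnection(conn_str)
    connection.Open()
    try:
        MySqlCommand(update_query, connection).ExecuteNonQuery()
    finally:
        connection.Close()
    OUT = "Script completed - status set to 1"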

The stats get updated regardless of whether the Dynamo script is run through Dynamo itself or through Dynamo Player. Normally, this will not delay the script execution by much, maybe a second or two, depending of course on the amount of data you are collecting and on the speed of the server where the database resides.
The final outcome, when viewed in MySQL Workbench, would look something like shown below. I quickly ran the script three times, failing it the first time on purpose, so you can see the 0 highlighted in yellow. That column represents the status of the script: if it completed properly, it is shown as 1, otherwise 0. This can be beneficial for troubleshooting when more information, such as inputs, is collected as well.
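Once rows accumulate, pulling the headline metric (run counts per script, split by completion status) takes a single query. A sketch against the hypothetical script_stats table used above; the same query could equally be run from Workbench or pointed at from Power BI:

import clr
clr.AddReferenceToFileAndPath(r'\\server\share\MySql.Data.dll')  # placeholder path
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand

conn_str = "server=localhost;user=dynamo;password=secret;database=dynamostats"  # placeholders
query = """SELECT script_id, COUNT(*) AS total_runs, SUM(completed) AS successful_runs
           FROM script_stats GROUP BY script_id ORDER BY total_runs DESC"""

connection = MySqlConnection(conn_str)
connection.Open()
reader = MySqlCommand(query, connection).ExecuteReader()
rows = []
while reader.Read():
    rows.append([reader["script_id"], reader["total_runs"], reader["successful_runs"]])
reader.Close()
connection.Close()

OUT = rows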

Final Thoughts
One should keep in mind that it is impossible to include every detail in the collected data, and that regular feedback sessions should still be held. A user can provide insight and give you information that you might not have thought of collecting automatically.
If your company is looking seriously at implementing visual-programming-based workflows, or any computational BIM design solutions, data collection and statistics for estimating the actual value created will become unavoidable at some point. Implementing usage tracking inside Dynamo / Grasshopper scripts can provide insight into which scripts pay off for the consultancy and which do not perform as expected. Such knowledge can help guide the process over time. In addition, integrating data collection inside every Dynamo script can enable more exotic solutions in the future, such as the application of machine learning, provided sensible data is collected. Opportunities are plentiful when you have enough resources, and in this digital age, the key resource is data. Collect it wisely, use it intelligently, guard it firmly.
