Hey,

On 01.03.2017 22:23, Jimmy Berry wrote:
> On Wednesday, March 1, 2017 5:44:58 PM CST Henne Vogelsang wrote:
>> If you need to record some extra time series data for your staging workflow engine, you can do that, as your engine always runs in the context of the OBS instance it's mounted on top of. So it will also have access to the InfluxDB instance etc.
>> The same is BTW true for access to the SQL database: your engine has the same access as the Rails app it's mounted from.
> As I would expect. I was looking for access to develop against, since it is difficult to recreate an accurate facsimile of the OBS instance and near impossible to simulate the variety of workflows through which requests have gone.
I very much doubt that. We have an extensive test suite that already 'simulates' all major workflows, including requests of the various kinds. For creating data you can use the tooling that exists, like our data factories[1]. If you need help with this, do not hesitate to contact me :-)
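For readers unfamiliar with test-data factories, the pattern can be sketched in plain Ruby. The real OBS suite uses FactoryBot-style factory definitions[1]; the struct and attribute names below are purely illustrative, not the actual OBS factories:

```ruby
# Minimal hand-rolled factory, illustrating the pattern that the OBS
# data factories[1] implement with FactoryBot: sensible defaults that
# individual tests can override per attribute.
Request = Struct.new(:state, :creator, :created_at)

# Hypothetical factory method; compare FactoryBot's create(:factory, attr: value).
def build_request(state: "new", creator: "Admin", created_at: Time.now)
  Request.new(state, creator, created_at)
end

declined = build_request(state: "declined", creator: "jberry")
puts declined.state # => "declined"
```

This lets a test construct a request in any workflow state with one line, instead of replaying the whole workflow against a production-like instance.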
> It would also be good to see if pulling certain metrics directly from the source tables is performant enough.
Aren't you getting ahead of yourself? Why don't you first figure out what you want to do and how and then worry about performance of the production DB :-)
> When I worked on the tooling used by the development site for other open source projects, it was possible to get a sanitized database dump or a staging environment that had access to both a clone of production and read access to production. These resources were invaluable for validating data migrations and tools before deployment.
This is a good practice that we also follow. But what has this to do with your tool? You are neither migrating nor deploying...
> Without such access it was impossible to predict all the ways in which data can be inconsistent, corrupted, or odd edge-cases.
Again you are getting ahead of yourself I think. We have a very well documented data structure. If something is inconsistent, corrupted or an odd edge case it is by our definition broken. If you come across such a case you should tell us or better yet fix that case :-)
> Given that storing additional information will not cover all the desired metrics, it is likely more effective to just record time series data. I'll have to look at the tool in question, but I would expect a background job to run that periodically writes a record to the time series database.
No, on the contrary. Every time something happens, a data point gets recorded into a data set in the time series DB. So let's say a request is closed. You would record the fact, the time, add some tags describing the resolution (accepted, declined) or the user who did this etc. Once you have this data in the time series DB you can query and display it :-)
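To make the event-driven recording concrete, here is a sketch in Ruby that formats such a data point in InfluxDB line protocol (a real setup would hand this to an InfluxDB client library; the measurement, tag, and field names are made up for illustration, not a fixed OBS schema):

```ruby
# Build an InfluxDB line-protocol string for a single event, e.g. a
# request being closed. Tags (resolution, user) are indexed and
# queryable; the field carries the value; the timestamp is in
# nanoseconds (whole-second precision here, for exact integer math).
def request_closed_point(resolution:, user:, time: Time.now)
  tags      = "resolution=#{resolution},user=#{user}"
  fields    = "count=1i"
  timestamp = time.to_i * 1_000_000_000
  "request_closed,#{tags} #{fields} #{timestamp}"
end

point = request_closed_point(resolution: "accepted", user: "jberry",
                             time: Time.at(1488404580))
puts point
# => request_closed,resolution=accepted,user=jberry count=1i 1488404580000000000
```

Querying "how many requests were declined per day, per user" then becomes a simple aggregation over the `request_closed` series, with no background job polling the SQL tables.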
> On that note, are the various Influx software pieces set up and hosted, or has nothing been done short of selecting the desired tool?
No, nothing is done yet. Just planned, sorry.

Henne

[1] https://github.com/openSUSE/open-build-service/tree/master/src/api/spec/fact...

--
Henne Vogelsang
http://www.opensuse.org

Everybody has a plan, until they get hit.
 - Mike Tyson