I previously reported a bug about "Cinder configuration (cinder.conf) is not automatically setup for Ceilometer during Devstack install"
https://bugs.launchpad.net/ceilometer/+bug/1210269
The bug was filed against Ceilometer, which was the wrong place, so it was marked invalid; it should have been filed against Devstack.
I wanted to know the best way to submit a patch that depends on another patch. Should I follow this guide?
https://wiki.openstack.org/wiki/Gerrit_Workflow#Add_dependency
jpich said:
following the guide is good. -R matters. Make sure the change id for the dependent commit is the same as on gerrit, so that you don't update it accidentally
I was wondering if it is okay to use assertEqual() on a float. For example, self.assertEqual(avg, 4) where avg is a float. I was worried about floating
point errors.
flwang said:
try http://docs.python.org/2/library/unittest.html#unittest.TestCase.assertAlmostEqual
you can try assertAlmostEqual
jd thinks this isn't an issue if the float was assigned an exact integer value.
So if you set avg = 4 and then run self.assertEqual(avg, 4),
that should work fine.
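A minimal sketch of the difference (the class name and values below are my own illustration, not from the Ceilometer tests):

```python
import unittest


class FloatComparisonExample(unittest.TestCase):
    def test_exact_integer_valued_float(self):
        # 4.0 is exactly representable as a float, so an exact
        # comparison against the integer 4 is safe.
        avg = 4.0
        self.assertEqual(avg, 4)

    def test_computed_float(self):
        # A computed average can carry rounding error; compare to
        # 7 decimal places (assertAlmostEqual's default) instead.
        avg = (0.1 + 0.2) / 0.1  # not exactly 3.0
        self.assertNotEqual(avg, 3.0)
        self.assertAlmostEqual(avg, 3.0)
```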
- group by blueprint on Launchpad: https://blueprints.launchpad.net/ceilometer/+spec/api-group-by
- group by blueprint discussion on the OpenStack wiki: https://wiki.openstack.org/wiki/Ceilometer/blueprints/api-group-by
Currently, I check each element of
results = list(self.conn.get_meter_statistics(f, groupby=['user_id']))
So my test grabs each element like
r1 = results[0]
then checks the attributes of each element like
self.assertEqual(r1.groupby, {'user_id': 'user-1'})
self.assertEqual(r1.count, 2)
self.assertEqual(r1.unit, 's')
self.assertEqual(r1.min, 2)
self.assertEqual(r1.max, 2)
self.assertEqual(r1.sum, 4)
self.assertEqual(r1.avg, 2)
However, this assumes that the elements are in a certain order, e.g. the first
element has groupby equal to {'user_id': 'user-1'}.
So I rewrote the code so that you don't check specific elements of results.
I first use a list comprehension to grab all the groupby attributes from
results
groupby_list = [r.groupby for r in results]
r.groupby is a dictionary. I use a nested list comprehension to grab the
keys/values of the dictionaries and flatten the resulting lists.
groupby_keys_set = set(x for sub_dict in groupby_list
for x in sub_dict.keys())
groupby_vals_set = set(x for sub_dict in groupby_list
for x in sub_dict.values())
Then I check that the keys and values are in the set of valid values.
self.assertEqual(groupby_keys_set, set(['resource_id']))
self.assertEqual(groupby_vals_set, set(['resource-1',
'resource-2',
'resource-3']))
This allows me to check that the values are correct without checking their order.
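Pulled together, the order-independent check runs like this (the Stats class and sample values below are toy stand-ins I made up; the real results come from the storage driver):

```python
class Stats:
    """Toy stand-in for the statistics objects the driver returns."""
    def __init__(self, groupby):
        self.groupby = groupby


# Deliberately out of order, to show the check doesn't depend on order.
results = [
    Stats({'resource_id': 'resource-2'}),
    Stats({'resource_id': 'resource-1'}),
    Stats({'resource_id': 'resource-3'}),
]

# Flatten the groupby dicts into sets of keys and values; comparing
# sets ignores the order of the elements in results.
groupby_list = [r.groupby for r in results]
groupby_keys_set = set(x for sub_dict in groupby_list
                       for x in sub_dict.keys())
groupby_vals_set = set(x for sub_dict in groupby_list
                       for x in sub_dict.values())

assert groupby_keys_set == set(['resource_id'])
assert groupby_vals_set == set(['resource-1', 'resource-2', 'resource-3'])
```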
Now, to check all the attributes of results, for each element of results I check that the other attributes match the groupby attribute for that element:
for r in results:
if r.groupby == {'resource_id': 'resource-1'}:
self.assertEqual(r.count, 3)
self.assertEqual(r.unit, 's')
self.assertEqual(r.min, 2)
self.assertEqual(r.max, 2)
self.assertEqual(r.sum, 6)
self.assertEqual(r.avg, 2) ...
and similar code for the other valid values of r.groupby.
jd made a few minor edits to address Doug Hellmann's and Mike Perez's code review comments. He added some NotImplementedError exceptions and refactored some code to avoid duplication.
Jenkins ran after Patch Set 6 and reported that the runs were UNSTABLE
jd reported the bug to OpenStack's Infrastructure team:
https://bugs.launchpad.net/openstack-ci/+bug/1212990
At 8:52 AM EST, fungi who works on infrastructure said this on #openstack-dev:
yes, we ran out of space for the jobs to upload their logs, so everything started going haywire. i added more space and it's working again now but stuff which failed earlier has to be rechecked or reverified so it re-runs
I wasn't clear about how group by works with the period parameter. There are
two possibilities:
- Sort the samples into groups, then compute the statistics by period within each group
- Sort the samples into periods, then compute the statistics by group within each period
jd said:
we create the periods, and then for each period we request using GROUP BY
that's what the code does as it is written now
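A rough sketch of that order of operations (simplified stand-in code, not the actual driver implementation; the function name and dict-based samples are my own):

```python
import datetime
from collections import defaultdict


def stats_by_period_then_group(samples, start, end, period, groupby_field):
    """Carve [start, end) into fixed-length periods, then group the
    samples inside each period by the requested field."""
    buckets = defaultdict(list)
    period_start = start
    while period_start < end:
        period_end = period_start + datetime.timedelta(seconds=period)
        for s in samples:
            if period_start <= s['timestamp'] < period_end:
                buckets[(period_start, s[groupby_field])].append(s['volume'])
        period_start = period_end
    # Only (period_start, group) combinations that actually contain
    # samples produce a result.
    return {key: {'count': len(vols), 'sum': sum(vols),
                  'avg': sum(vols) / len(vols)}
            for key, vols in buckets.items()}
```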
I had to change the timestamps of the samples in test_sample_data for the
prepare_data() method of the class StatisticsGroupByTest so that I had some
decent results for my period tests.
I wrote test_group_by_with_period() and
test_group_by_with_query_filter_and_period(). Each element of
results = list(self.conn.get_meter_statistics(f,
period=7200,
groupby=['project_id']))
will have a distinct groupby and period_start attribute. So I check each
element of the results by checking that the other attributes have values that
correspond with that element's combination of groupby and period_start.
for r in results:
if (r.groupby == {'project_id': 'project-1'} and
r.period_start == datetime.datetime(2013, 8, 1, 10, 11)):
self.assertEqual(r.count, 3)
self.assertEqual(r.unit, 's')
self.assertEqual(r.min, 1)
self.assertEqual(r.max, 4)
self.assertEqual(r.sum, 6)
self.assertEqual(r.avg, 2)
self.assertAlmostEqual(r.duration, 4260.0)
self.assertEqual(r.duration_start,
datetime.datetime(2013, 8, 1, 10, 11))
self.assertEqual(r.duration_end,
datetime.datetime(2013, 8, 1, 11, 22))
self.assertEqual(r.period, 7200)
self.assertEqual(r.period_end,
datetime.datetime(2013, 8, 1, 12, 11))
...
I used self.assertAlmostEqual() for r.duration because I was worried about
a floating point equality problem.
I ran $ tox -e py27 -- storage to make sure my new period tests run okay.
Since jd implemented groupby in the SQL driver, the SQL driver also ran against
the new period tests. Unfortunately, when the period tests ran the line
results = list(self.conn.get_meter_statistics(f, period=7200, groupby=['project_id']))
there was an error:
Traceback (most recent call last):
File "/opt/stack/ceilometer/tests/storage/base.py", line 1246, in test_group_by_with_period
groupby=['project_id']))
File "/opt/stack/ceilometer/ceilometer/storage/impl_sqlalchemy.py", line 518, in get_meter_statistics
r = q.all()[0]
IndexError: list index out of range
jd suggested:
Try to change:
res = self._make_stats_query(sample_filter, groupby).all()[0]
to:
res = self._make_stats_query(sample_filter, None).all()[0]
in the SQL driver ceilometer/storage/impl_sqlalchemy.py, but the error stays
the same.
I found that the error goes away if I change the test data so that every period contains samples; apparently, the SQL driver can't deal with a period that has no samples in it.
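One way the driver could guard against that failure mode is to check for an empty result set before indexing. This is my own sketch of the idea, not jd's actual fix, and FakeQuery just mimics enough of a SQLAlchemy-style query object to run:

```python
class FakeQuery:
    """Stand-in for a SQLAlchemy query, just enough for the sketch."""
    def __init__(self, rows):
        self._rows = rows

    def all(self):
        return self._rows


def first_row_or_none(q):
    """Hypothetical guard: q.all()[0] raises IndexError when a period
    contains no samples, so check for an empty result set first."""
    rows = q.all()
    if not rows:
        return None  # the period had no samples; the caller can skip it
    return rows[0]
```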
jd revised the SQL driver so that it could pass my new group by with period storage tests.
I pointed out to jd a problem I didn't know how to resolve. For example, in
line 1321 in https://review.openstack.org/#/c/41597/7/tests/storage/base.py,
I wrote an else clause there that catches any result with the wrong groupby
and period_start. I didn't know what assert statement to use there, so I used
assert False, but I had doubts that was a good idea. jd said:
don't use assert, use self.assertXXXX and in this case the best is to use self.fail()
jd quickly submitted another patch set (Patch Set 9) to replace my
assert False, ... statements with self.fail(...).
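The resulting pattern looks roughly like this (the dict-based results and expected counts are illustrative, not the real test values):

```python
import unittest


class GroupByExample(unittest.TestCase):
    def check(self, results):
        for r in results:
            if r['groupby'] == {'resource_id': 'resource-1'}:
                self.assertEqual(r['count'], 3)
            elif r['groupby'] == {'resource_id': 'resource-2'}:
                self.assertEqual(r['count'], 1)
            else:
                # self.fail() always raises a proper test failure with a
                # message; a bare `assert False` is stripped under python -O.
                self.fail('Unexpected groupby value: %s' % r['groupby'])
```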
Jay Pipes had some comments on the storage driver implementation:
- Modify the prototype of all get_meter_statistics() functions to accept the groupby parameter, raising NotImplementedError in the first place. jd may have done most of this already.
- Add tests to check group by in combination with the period parameter:
  - group by single field with period
  - group by single metadata field with period
  - group by single field with query filter and period
  - group by single metadata field with query filter and period (this is the example in the wiki)
  I did this in Patch Set 7.
- Revise unit tests so that results from get_meter_statistics() don't have to be in a specific order. I did this in Patch Set 5.
- Start implementing the MongoDB driver.
  The MongoDB driver implementation should go in a separate patch from the SQL driver implementation. The MongoDB patch should list the SQL driver patch as a dependency, since the SQL patch hasn't been merged yet and the MongoDB patch needs the storage tests.