I discussed the issues below with jd and he suggested I file bug reports on them.
Currently, you need to manually alter the Cinder configuration file
/etc/cinder/cinder.conf in order to enable auditing events for Ceilometer.
This allows Ceilometer to collect data from Cinder. However, it would be much better if this were taken care of automatically when you install Ceilometer with
the DevStack script stack.sh.
Note: the manual change to the Cinder configuration file is just to add the line
notification_driver=cinder.openstack.common.notifier.rpc_notifier
to /etc/cinder/cinder.conf
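For context, here is roughly what the relevant part of /etc/cinder/cinder.conf looks like after the change (I'm assuming the option goes in the [DEFAULT] section, as is standard for oslo.config-style files):

```ini
[DEFAULT]
# Emit notifications over RPC so that Ceilometer can collect them
notification_driver=cinder.openstack.common.notifier.rpc_notifier
```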
This is explained here: https://github.com/openstack/ceilometer/blob/master/doc/source/install/development.rst
Bug report: https://bugs.launchpad.net/devstack/+bug/1210269
The list variable self.counters is created and appended to in the storage test class StatisticsTest in tests/storage/base.py
However, as far as I can tell, the variable is never used for anything useful.
Bug report: https://bugs.launchpad.net/ceilometer/+bug/1210278
I deleted all references to self.counters in tests/storage/base.py in this
patch.
Patch: https://review.openstack.org/#/c/40983/
DBTestBase and EventTestBase are declared as abstract classes in tests/storage/base.py
However, neither class defines any abstract methods, so the abstract class declarations are unnecessary.
Bug report: https://bugs.launchpad.net/ceilometer/+bug/1210281
Removed the abstract class declarations from the DBTestBase and EventTestBase classes in the storage driver tests, and also removed the import of the abstract base class module, since there are no longer any abstract class declarations.
Patch: https://review.openstack.org/#/c/41000/
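To illustrate why the declarations were unnecessary: in Python, a metaclass of ABCMeta only prevents instantiation if at least one method is marked @abc.abstractmethod. A minimal sketch (the class name here is my own placeholder, not from the Ceilometer code, and I'm using Python 3 syntax):

```python
import abc

# Declared "abstract" via abc.ABC (i.e. metaclass=ABCMeta), but with no
# methods marked @abc.abstractmethod -- analogous to the old
# DBTestBase/EventTestBase declarations.
class NoAbstractMethods(abc.ABC):
    def greet(self):
        return "hello"

# Because nothing is marked abstract, ABCMeta enforces nothing:
# the class instantiates without complaint, so the declaration is inert.
obj = NoAbstractMethods()
print(obj.greet())  # prints "hello"
```

In other words, the ABCMeta machinery added nothing to these test base classes, so removing it simplifies the code without changing behavior.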
I had a general discussion with vkmc and jpich about blueprints. I asked them what they write in blueprints. Basically, they said that you should write things down so that if "you were hit by a bus", another developer could take over the blueprint. "Bus factor" is a software idea: http://en.wikipedia.org/wiki/Bus_factor
More specifically, vkmc had this to say:
It's important to add in the whiteboard all you are doing, what your fix does and which are the design decisions you took... that way it's easier to review it and to use it later
Yeah, the whiteboard is where you put all the decisions you took and ideas
Later, jd also told me that he didn't like the blueprints on Launchpad because those blueprints are not saved. He prefers to put the important information on the OpenStack wiki.
Compare these:
- The group by blueprint on Launchpad: https://blueprints.launchpad.net/ceilometer/+spec/api-group-by
- The group by blueprint discussion on the OpenStack wiki: https://wiki.openstack.org/wiki/Ceilometer/blueprints/api-group-by
vkmc's response to jd's remarks:
Probably the best way to put organized and complex information related to a blueprint would be to use a wiki
Although for short comments or as a way of communication between people interested in that bp, using the whiteboard seems more flexible
Ceilometer uses several different terms to describe how it does measurements. Some common terms are "meter", "source", "sample". These are explained in the glossary of the documentation: http://docs.openstack.org/developer/ceilometer/glossary.html
The term "counter" was used in the past, but it has been decided that this term will be deprecated in the future: http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-07-11-15.00.log.txt
However, the term counter is still all over the source code and I still have to work with it, so I wanted to understand what it meant.
Question: what are counters?
jd says:
Yes, we used the term counter at the beginning of the project to designate a measure sample. There was a lot of confusion between counter, meter, sample, etc., and since sample was clearer we decided toward this term.
Question: Why do we need keyed hashing for message authentication?
jd says:
That allows the collector to verify who sent the message. Anyone connecting to the message bus could send a sample if there were no signature, so the collector would store data that could have been faked. With that, we're sure both the sender and the collector share a common secret.
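A minimal sketch of the keyed-hashing idea using Python's standard hmac module (this is an illustration of the concept, not Ceilometer's actual signing code; the secret, message, and function names are my own):

```python
import hashlib
import hmac

SHARED_SECRET = b"change-me"  # secret known to both the sender and the collector

def sign(message: bytes) -> str:
    # The sender attaches an HMAC computed over the message with the shared secret.
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # The collector recomputes the HMAC and compares in constant time.
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b'{"counter_name": "cpu", "counter_volume": 1}'
sig = sign(msg)
assert verify(msg, sig)           # a genuine message passes
assert not verify(b"faked", sig)  # a forged message fails verification
```

Anyone on the bus can still see and send messages, but without the shared secret they cannot produce a signature the collector will accept.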
Question: how does (xUnit) testing know how to do the right thing, i.e. run the tests correctly?
jd says:
It's a convention/protocol. They search for every Python module whose name starts with "test_" and load it. Then they inspect it to check each class that inherits from the base class "testtools.TestCase". If a class inherits from this one, they instantiate it to get an object. On this object, they check if it has a setUp() method, and call it if it does. Then they iterate over every method of the object whose name starts with "test_" and call them. Finally, they check for a tearDown() method and call it if it exists.
I was confused why the Ceilometer documentation page here
http://docs.openstack.org/developer/ceilometer/install/development.html
doesn't mention setting up the Cinder configuration file, but the GitHub source file does mention it
https://github.com/openstack/ceilometer/blob/master/doc/source/install/development.rst
jd told me that the documentation at http://docs.openstack.org/developer/ceilometer/ is the documentation for Grizzly, not for the git version.
(I think they changed recently because it used to be git AFAIR)
We talked about the group by blueprint.
- group by blueprint on Launchpad: https://blueprints.launchpad.net/ceilometer/+spec/api-group-by
- group by blueprint discussion on the OpenStack wiki: https://wiki.openstack.org/wiki/Ceilometer/blueprints/api-group-by
There are at least two layers of changes needed -- one layer in the storage drivers, another layer in the API. For each layer, we need to write the tests, then write the implementation. There are three storage drivers, so we need to implement group by in SQLAlchemy, MongoDB, and HBase. However, jd said we can drop HBase for now.
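To make the goal concrete, here is a driver-agnostic sketch of what "group by" statistics means, in plain Python. This is purely illustrative: the sample dicts and the helper function are my own, and the real implementations will live inside each storage driver's query machinery (SQLAlchemy, MongoDB):

```python
from collections import defaultdict

# Hypothetical samples; the field names mirror Ceilometer sample attributes.
samples = [
    {"user_id": "u1", "source": "s1", "volume": 2},
    {"user_id": "u1", "source": "s2", "volume": 4},
    {"user_id": "u2", "source": "s1", "volume": 6},
]

def statistics_grouped_by(samples, group):
    """Compute count/sum/avg per distinct combination of the group fields."""
    buckets = defaultdict(list)
    for s in samples:
        key = tuple(s[field] for field in group)
        buckets[key].append(s["volume"])
    return {
        key: {"count": len(v), "sum": sum(v), "avg": sum(v) / len(v)}
        for key, v in buckets.items()
    }

# Grouping by user_id yields one statistics record per user:
# u1 has two samples (2 and 4), u2 has one (6).
print(statistics_grouped_by(samples, ["user_id"]))
```

Grouping by multiple fields, e.g. ["user_id", "source"], would simply produce one statistics record per distinct (user_id, source) pair.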
I talked about my design ideas for the storage driver group by tests.
My proposal for the group by tests we need in storage:
- single field, "user-id"
- single field, "resource-id"
- single field, "project-id"
- single field, "source"
- single metadata field
- multiple fields
- multiple metadata fields
- multiple mixed fields, regular and metadata
I separated the test cases for non-metadata and metadata fields. Metadata fields are not hardcoded ahead of time and have to be treated differently from non-metadata fields.
jd said we can drop implementing metadata fields for now. For the group by
tests involving metadata fields, I can define the tests and just put pass in the body.
My proposal for the structure of a group by test on a single field (an example):
def test_group_by_user(self):
    f = storage.SampleFilter(
        meter='volume.size',
    )
    results = list(self.conn.get_meter_statistics(f, group=['user_id']))
    self.assertEqual(...)
    self.assertEqual(...)
    ...
    self.assertEqual(...)
If we have multiple fields to group by, the call for get_meter_statistics()
would be something like:
results = list(self.conn.get_meter_statistics(f, group=['user_id', 'source']))
I could write the group by tests inside the class StatisticsTest in
tests/storage/base.py, but I don't like the test data in that class. jd told
me to instead write my own class. One reason is that I can write my own
prepare_data() method with separate group by test data in it. The other
reason is that StatisticsTest is run against all three storage drivers
(including HBase), but at the moment, we don't plan to support group by in
HBase. If the group by tests are in their own class, it's easy to
enable/disable group by testing in HBase or any of the storage drivers.
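A rough sketch of what such a separate class might look like. The class and attribute names here are my own placeholders, not from the codebase; in the real tests the base class would be the existing DBTestBase in tests/storage/base.py, but a stub stands in here so the sketch is self-contained:

```python
class DBTestBaseStub(object):
    """Stand-in for the real DBTestBase in tests/storage/base.py."""
    pass

class StatisticsGroupByTest(DBTestBaseStub):
    # A dedicated class keeps the group by tests and their data separate
    # from StatisticsTest, so individual drivers (e.g. HBase) can skip it.

    def prepare_data(self):
        # Insert samples crafted specifically for the group by assertions,
        # independent of StatisticsTest's data set.
        self.samples_loaded = True

    def test_group_by_user(self):
        pass  # stub; metadata-field tests can likewise start as pass-only stubs

t = StatisticsGroupByTest()
t.prepare_data()
```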