This repository was archived by the owner on Feb 2, 2026. It is now read-only.

Add more tests for the Agilent MSOX 3024A scope #15

Open

matthewrankin wants to merge 6 commits into python-ivi:master from matthewrankin:add-more-tests-for-agilent-msox3024a

Conversation

@matthewrankin
Contributor

Added more unit tests for the Agilent MSOX 3024A scope.

@alexforencich
Contributor

I don't think mocks are the right way to do this. For the simple stuff, where only _ask is used, they work. But for more complex queries (reading a waveform), they will not work very well. I think a more general solution would be to implement test versions of _read_raw and _write_raw and connect these to input and output buffers of some sort. I think putting binary strings into Queue objects would work best. I use a similar technique to test high-level framed interfaces in MyHDL (e.g. AXI stream) and it works very well. I will look into putting together a basic debug interface that does this.
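The queue-backed debug interface described above might look something like the following sketch. The class and helper names (MockInterface, queue_response, last_command) are illustrative assumptions, not the actual python-ivi API:

```python
# Hypothetical sketch of a queue-backed test interface: the driver-facing
# _read_raw/_write_raw methods are wired to in-memory Queue objects holding
# binary strings, so a test can script responses and inspect commands.
from queue import Queue


class MockInterface:
    """Connects _read_raw/_write_raw to input and output byte buffers."""

    def __init__(self):
        self.tx = Queue()  # bytes the driver writes (commands sent out)
        self.rx = Queue()  # bytes queued up as instrument responses

    def _write_raw(self, data):
        self.tx.put(bytes(data))

    def _read_raw(self, num=-1):
        return self.rx.get()

    # Test-side helpers.
    def queue_response(self, data):
        self.rx.put(bytes(data))

    def last_command(self):
        return self.tx.get()


# Usage: queue a canned response, exercise the raw API, inspect traffic.
iface = MockInterface()
iface.queue_response(b"AGILENT TECHNOLOGIES,MSO-X 3024A,MY00000000,01.00\n")
iface._write_raw(b"*IDN?\n")
resp = iface._read_raw()
assert iface.last_command() == b"*IDN?\n"
assert resp.startswith(b"AGILENT")
```

Because the buffers hold raw bytes, the same scripted exchange works whether the driver reaches the interface through _ask or through _read_raw/_write_raw directly.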

@matthewrankin
Contributor Author

@alexforencich Implementing test versions of _read_raw() and _write_raw() and connecting these to input and output buffers would be good for integration testing. However, for unit testing, I feel that using the Python mock module to provide test stubs is appropriate, since it allows us to "test a single unit in isolation, verifying that it works as expected, without considering what the rest of the program would do." (Python Testing by Daniel Arbuckle, p. 9)

As stated in this answer to the StackOverflow question "What is the difference between integration and unit tests?":

On the opposite, a Unit test testing a single method relies on the often wrong assumption that the rest of the software is correctly working, because it explicitly mocks every dependencies.

Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working.

I agree with you that integration tests that include reading a waveform would be great to have. However, IMHO they don't negate the need for unit tests. As an example, unit testing helped me find issue #14 and would have helped in finding issues #10 and #8.
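The mock-based style argued for above might look roughly like this sketch. The FakeScope class, the getter name, and the SCPI command are illustrative assumptions, not code from the pull request:

```python
# Minimal sketch of a mock-based unit test: _ask is stubbed with
# unittest.mock so the getter is tested in isolation from the instrument.
import unittest
from unittest import mock


class FakeScope:
    """Stand-in for a driver whose getter is implemented via _ask."""

    def _ask(self, cmd):
        raise NotImplementedError  # the real version talks to the instrument

    def _get_timebase_scale(self):
        return float(self._ask(":timebase:scale?"))


class TestTimebaseScale(unittest.TestCase):
    def test_get_timebase_scale(self):
        scope = FakeScope()
        # Stub _ask so only the getter's own logic is exercised.
        with mock.patch.object(scope, "_ask", return_value="0.0005") as m:
            self.assertEqual(scope._get_timebase_scale(), 0.0005)
            m.assert_called_once_with(":timebase:scale?")


# Run the single test case directly.
unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestTimebaseScale)
)
```

The test verifies both the returned value and the exact command string, which is what makes it sensitive to implementation changes, the trade-off discussed below.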

@alexforencich
Contributor

I have two issues with using mocks like this. First is that there is more than one way to implement the actual communication to the instrument. The implementation could use _ask, _read and _write, or _read_raw and _write_raw. If you only mock _ask, then the test will fail if the implementation is changed to use the API in a different, but equivalent, manner. My idea is basically to do a higher-level mock that will hopefully be more generic and more maintainable. Second is that I am working on some architectural changes that will eliminate many of the getters and setters that you're writing unit tests for.

Implementing tests this way also adds a very significant amount of redundancy to the code, and most of those issues you have discovered stem from many of the redundancies in the current architecture of Python IVI. I'm working on ways of reducing that right now, especially caching, which would basically eliminate the possibility of #14 occurring again in most cases. I'm also working on an ultimate solution to #10 at the same time. This may eliminate many strict unit tests, as the getters and setters that they test will be moved into generic getters and setters. However, there is a great need for integration-level tests that check to make sure the correct commands are sent to the instrument and the correct responses are received, independent of the precise implementation of each command in the driver.

@matthewrankin
Contributor Author

@alexforencich Thanks for both the discourse on this topic (you're making me think through some things that I hadn't before), and thanks for your work on the code. If I remove the unit tests containing mocks and refactor the other unit tests into separate files matching each module used—agilentMSOX3024A.py, agilent3000A.py, agilent2000A.py, etc.—would you accept that pull request?

I hope you don't mind continuing our discussion. Below are a few comments. I'd appreciate your feedback.

If you only mock _ask, then the test will fail if the implementation is changed to use the API in a different, but equivalent, manner.

If the implementation changes, I agree that the unit test would need to be updated. If a generic mock were used and the implementation changed from _ask to _read, what happens when, down the road, _ask and _read are changed to return different types of objects—one a binary string literal and the other a unicode string literal? Wouldn't we want the test to stub the implementation as used, so we know if there is an implementation change?

Implementing tests this way also adds a very significant amount of redundancy to the code, and most of those issues you have discovered stem from many of the redundancies in the current architecture of Python IVI.

Respectfully, I disagree with your statement that bugs in code "stem from many of the redundancies in the current architecture." Bugs stem from typos, logic errors, syntax errors, etc., all of which can be present regardless of architecture. I see unit tests as reducing the risk of introducing errors when refactoring code. I have no fear of refactoring when my code has unit tests; however, I have great trepidation when refactoring code without unit tests.

However, there is a great need for integration-level tests that check to make sure the correct commands are sent to the instrument and the correct responses are received, independent of the precise implementation of each command in the driver.

I agree there is a great need for integration-level tests, but I see integration tests as being in addition to unit tests, not in place of them.

@alexforencich
Contributor

Certainly. Python IVI most certainly needs tests; we just need to figure out the best way to implement them.

I agree with your assessment on bugs. I was referring specifically to the couple of bugs that you found as an example of where redundancy in Python IVI makes those types of bugs possible. Take a look at any Python IVI driver and you will see that 90% of the properties are exactly the same, excepting the names and commands. Wouldn't it be simpler just to store the names and commands in some sort of data structure and then use one common implementation for the actual code? This would eliminate the problem of storing the result in the wrong spot for most of the properties.

I am also considering completely overhauling the caching mechanism by adding a shim between the property access and the _get or _set method call that checks to see if the value is in the cache before calling _get, so the _get method will never be called if the value is in the cache. However, this may be difficult to integrate with existing drivers. On the other hand, there is something to be said for not having all of this 'behind the scenes' code, as it will make things more convoluted.

The other thing is that I want to streamline how the values passed to _set routines are processed, to properly support mappings and range checks in a streamlined manner.
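The data-driven property idea sketched above, one common implementation parameterized by a table of names and commands, plus a caching shim in front of the getter, could look roughly like this. All names here (PROPERTIES, Instrument, the SCPI commands) are illustrative assumptions, not the actual python-ivi internals:

```python
# Sketch: properties defined as data (query command, set format, converter),
# served by one shared get/set implementation with a cache shim in front.
PROPERTIES = {
    "timebase_scale": (":timebase:scale?", ":timebase:scale %e", float),
    "trigger_level": (":trigger:level?", ":trigger:level %e", float),
}


class Instrument:
    def __init__(self):
        self._cache = {}

    def _ask(self, cmd):
        # The real version queries the instrument; stubbed for the sketch.
        return "0.001"

    def _write(self, cmd):
        pass  # the real version sends the command to the instrument

    def get(self, name):
        # Caching shim: _ask is never issued if the value is already cached.
        if name not in self._cache:
            query, _, conv = PROPERTIES[name]
            self._cache[name] = conv(self._ask(query))
        return self._cache[name]

    def set(self, name, value):
        _, fmt, conv = PROPERTIES[name]
        self._write(fmt % conv(value))
        # One shared code path means the result is always stored under the
        # right key, eliminating wrong-attribute bugs by construction.
        self._cache[name] = conv(value)


inst = Instrument()
assert inst.get("timebase_scale") == 0.001
inst.set("trigger_level", 0.5)
assert inst.get("trigger_level") == 0.5  # served from cache, no query sent
```

Mappings and range checks could be added as extra columns in the same table, keeping validation in the shared code path as well.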

In terms of tests, I think unit tests for python-ivi and integration tests for the drivers would be ideal. The driver tests should be able to assume that python-ivi is working correctly. This would be the easiest to maintain as drivers could be significantly reworked without changing gobs and gobs of unit tests. I want to be able to modify a driver for an instrument I do not have access to, but verify that the changes will not break the driver.

I put together what I have in mind right now for a driver test: https://github.com/python-ivi/python-ivi/blob/master/ivi/agilent/test/test_agilent34401A.py . It probably needs a lot of work, but that's sort of the kind of test I would like to see for the drivers. Then, we can implement tests for specific instruments that we can compare against. If the tests go this route, would it be better to bury them in the tree and import the single module, or put them in /tests and import the whole of ivi?
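A driver-level test of the kind described, asserting on the raw command/response traffic rather than on internal method calls, might take a shape like the following. The VirtualInstrument class and the example exchange are assumptions for illustration, not the contents of the linked test file:

```python
# Sketch of an integration-style driver test: a scripted virtual instrument
# records every command sent and replays canned responses, so the test
# passes regardless of whether the driver uses _ask or _read_raw/_write_raw.
class VirtualInstrument:
    """Scripted instrument: records commands, replays canned responses."""

    def __init__(self, responses):
        self.commands = []
        self.responses = dict(responses)

    def write(self, cmd):
        self.commands.append(cmd)

    def read(self):
        # Reply to the most recently written query command.
        return self.responses[self.commands[-1]]


def test_identification_query():
    vi = VirtualInstrument({"*IDN?": "HEWLETT-PACKARD,34401A,0,1.0\n"})
    # A real test would open the driver on vi and call its public API;
    # here we drive vi directly to show the assertion style.
    vi.write("*IDN?")
    assert vi.read().strip() == "HEWLETT-PACKARD,34401A,0,1.0"
    assert vi.commands == ["*IDN?"]


test_identification_query()
```

Instrument-specific test modules could then share one such virtual instrument per family, varying only the scripted command/response tables.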

Also, I am looking into enabling Travis CI for Python IVI once we get the testing figured out. I have tox working with the tests I have so far on Python 2.6, 2.7, 3.3, and 3.4.
