Feat: Default to no multiprocessing. #319
@@ -27,7 +27,9 @@ class DataMode(Enum):

 def new_data(location=None, loc_record=None, name=None, overwrite=False,
-             io=None, data_manager=None, mode=DataMode.LOCAL, **kwargs):
+             io=None, data_manager=False, mode=DataMode.LOCAL, **kwargs):
+    # NOTE(giulioungaretti): leave this docstring as it is, because
+    # documenting the types is silly in this case.
Review comment: I don't follow, who is this note for?
Reply: for me :D
     """
     Create a new DataSet.
@@ -57,11 +59,11 @@ def new_data(location=None, loc_record=None, name=None, overwrite=False,
            says the root data directory is the current working directory, ie
            where you started the python session.

-        data_manager (DataManager or False, optional): manager for the
-            ``DataServer`` that offloads storage and syncing of this
-            ``DataSet``. Usually omitted (default None) to use the default
-            from ``get_data_manager()``. If ``False``, this ``DataSet`` will
-            store itself.
+        data_manager (Optional[bool]): use a manager for the
+            ``DataServer`` that offloads storage and syncing of this
+            ``DataSet``. Defaults to ``False``, i.e. this ``DataSet`` will
+            store itself without extra processes. Set to ``True`` to use the
+            default from ``get_data_manager()``.

         mode (DataMode, optional): connection type to the ``DataServer``.
             ``DataMode.LOCAL``: this DataSet doesn't communicate across
@@ -105,11 +107,11 @@ def new_data(location=None, loc_record=None, name=None, overwrite=False,
     if location and (not overwrite) and io.list(location):
         raise FileExistsError('"' + location + '" already has data')

-    if data_manager is False:
+    if data_manager is True:
+        data_manager = get_data_manager()
+    else:
         if mode != DataMode.LOCAL:
             raise ValueError('DataSets without a data_manager must be local')
-    elif data_manager is None:
-        data_manager = get_data_manager()

     return DataSet(location=location, io=io, data_manager=data_manager,
                    mode=mode, **kwargs)
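To make the new default concrete, here is a hedged usage sketch from the caller's side; the ``name='sweep'`` argument is only illustrative, and the import path is the one already used elsewhere in this diff:

```python
from qcodes.data.data_set import new_data, DataMode

# New default: the DataSet stores itself in the main process;
# no DataManager / DataServer subprocess is started.
ds_local = new_data(name='sweep')

# Opting back into the managed path is now explicit:
ds_managed = new_data(name='sweep', data_manager=True,
                      mode=DataMode.PUSH_TO_SERVER)

# Refusing a manager while asking for a non-local mode raises, per the
# branch above:
#   new_data(name='sweep', mode=DataMode.PUSH_TO_SERVER)
#   -> ValueError: DataSets without a data_manager must be local
```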
@@ -206,11 +208,11 @@ class DataSet(DelegateAttributes):
            says the root data directory is the current working directory, ie
            where you started the python session.

-        data_manager (DataManager or False, optional): manager for the
-            ``DataServer`` that offloads storage and syncing of this
-            ``DataSet``. Usually omitted (default None) to use the default
-            from ``get_data_manager()``. If ``False``, this ``DataSet`` will
-            store itself.
+        data_manager (Optional[bool]): use a manager for the
+            ``DataServer`` that offloads storage and syncing of this
+            ``DataSet``. Defaults to ``False``, i.e. this ``DataSet`` will
+            store itself without extra processes. Set to ``True`` to use the
+            default from ``get_data_manager()``.

         mode (DataMode, optional): connection type to the ``DataServer``.
             ``DataMode.LOCAL``: this DataSet doesn't communicate across
@@ -258,7 +260,7 @@ class DataSet(DelegateAttributes):
     background_functions = OrderedDict()

     def __init__(self, location=None, mode=DataMode.LOCAL, arrays=None,
-                 data_manager=None, formatter=None, io=None, write_period=5):
+                 data_manager=False, formatter=None, io=None, write_period=5):
         if location is False or isinstance(location, str):
             self.location = location
         else:
@@ -281,7 +283,7 @@ def __init__(self, location=None, mode=DataMode.LOCAL, arrays=None,
             for array in arrays:
                 self.add_array(array)

-        if data_manager is None and mode in SERVER_MODES:
+        if data_manager is True and mode in SERVER_MODES:
             data_manager = get_data_manager()

         if mode == DataMode.LOCAL:
@@ -1,7 +1,8 @@
 """Instrument base class."""
-import weakref
-import time
 import logging
+import time
+import warnings
Review comment: unused addition?
+import weakref

 from qcodes.utils.metadata import Metadatable
 from qcodes.utils.helpers import DelegateAttributes, strip_attrs, full_class
@@ -75,11 +76,12 @@ class Instrument(Metadatable, DelegateAttributes, NestedAttrAccess):

     _all_instruments = {}

-    def __new__(cls, *args, server_name='', **kwargs):
+    def __new__(cls, *args, server_name=None, **kwargs):
         """Figure out whether to create a base instrument or proxy."""
         if server_name is None:
             return super().__new__(cls)
         else:
+            warnings.warn("Multiprocessing is in beta, use at own risk", UserWarning)
             return RemoteInstrument(*args, instrument_class=cls,
                                     server_name=server_name, **kwargs)
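A hedged sketch of what the new dispatch means at instantiation time; ``MockGates`` and ``AMockModel`` are the mock classes used in the test changes further down, and their imports are omitted because the test module's path is not shown in this diff:

```python
model = AMockModel()  # as in setUpClass in the tests below

# Default server_name=None: a plain, in-process instrument, no server process.
gates = MockGates(model=model)

# Any non-None server_name (the tests below pass server_name='') goes through
# RemoteInstrument and now emits the "Multiprocessing is in beta" warning first.
remote_gates = MockGates(model=model, server_name='')
```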
@@ -50,6 +50,7 @@
 import multiprocessing as mp
 import time
 import numpy as np
+import warnings

 from qcodes.station import Station
 from qcodes.data.data_set import new_data, DataMode
@@ -62,7 +63,10 @@
 from .actions import (_actions_snapshot, Task, Wait, _Measure, _Nest,
                       BreakIf, _QcodesBreak)

+# Switches off multiprocessing by default; can't be altered after module import.
+# TODO(giulioungaretti) use config.
+USE_MP = False
 MP_NAME = 'Measurement'
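The review thread on ``run()`` below touches on why this flag "can't be altered after module import": ``USE_MP`` is only read when ``run()``'s default arguments are bound. A minimal, qcodes-independent sketch of that Python behaviour:

```python
USE_MP = False

def run(background=USE_MP):
    # the default value was evaluated once, at function definition time
    return background

USE_MP = True                  # flipping the module flag afterwards...
print(run())                   # ...still prints False: the old default is baked in
print(run(background=True))    # a per-call override still works
```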
@@ -640,7 +644,7 @@ def _check_signal(self):
         else:
             raise ValueError('unknown signal', signal_)

-    def get_data_set(self, data_manager=None, *args, **kwargs):
+    def get_data_set(self, data_manager=False, *args, **kwargs):
         """
         Return the data set for this loop.
         If no data set has been created yet, a new one will be created and returned.
@@ -670,6 +674,7 @@ def get_data_set(self, data_manager=None, *args, **kwargs):
         if data_manager is False:
             data_mode = DataMode.LOCAL
         else:
+            warnings.warn("Multiprocessing is in beta, use at own risk", UserWarning)
             data_mode = DataMode.PUSH_TO_SERVER

         data_set = new_data(arrays=self.containers(), mode=data_mode,
@@ -688,20 +693,19 @@ def run_temp(self, **kwargs):
         return self.run(background=False, quiet=True,
                         data_manager=False, location=False, **kwargs)

-    def run(self, background=True, use_threads=True, quiet=False,
-            data_manager=None, station=None, progress_interval=False,
+    def run(self, background=USE_MP, use_threads=False, quiet=False,
+            data_manager=USE_MP, station=None, progress_interval=False,
Review comment (alexcjohnson): This is a bit confusing - it looks like you've made it configurable, but it only takes the value of … Also, if you were to set …
Reply (giulioungaretti): @alexcjohnson yes that's by design, because it will be configured at module import, with a config system. And one won't change it after loading. And yes, that's not a consistent API, will look into it.
Reply (alexcjohnson): OK, I would have made it configurable at run time, but we can leave it this way if we document that clearly in the …
Reply (giulioungaretti): you can still run with multiprocessing, by setting it at run time, no?
Reply (alexcjohnson): yes of course, it's not persistent but you can access this behavior on a run-by-run basis. Just make the behavior clear in the docstrings so someone looking at the source doesn't see …
             *args, **kwargs):
         """
         Execute this loop.

-        background: (default True) run this sweep in a separate process
+        background: (default False) run this sweep in a separate process
             so we can have live plotting and other analysis in the main process
         use_threads: (default True): whenever there are multiple `get` calls
             back-to-back, execute them in separate threads so they run in
             parallel (as long as they don't block each other)
         quiet: (default False): set True to not print anything except errors
-        data_manager: a DataManager instance (omit to use default,
-            False to store locally)
+        data_manager: set to True to use a DataManager. Defaults to False.
         station: a Station instance for snapshots (omit to use a previously
             provided Station, or the default Station)
         progress_interval (default None): show progress of the loop every x
@@ -737,6 +741,7 @@ def run(self, background=True, use_threads=True, quiet=False,
             prev_loop.join()

         data_set = self.get_data_set(data_manager, *args, **kwargs)

         self.set_common_attrs(data_set=data_set, use_threads=use_threads,
                               signal_queue=self.signal_queue)
@@ -762,6 +767,7 @@ def run(self, background=True, use_threads=True, quiet=False,
                   flush=True)

         if background:
+            warnings.warn("Multiprocessing is in beta, use at own risk", UserWarning)
             p = QcodesProcess(target=self._run_wrapper, name=MP_NAME)
             p.is_sweep = True
             p.signal_queue = self.signal_queue
@@ -31,6 +31,7 @@ def __init__(self, *args, **kwargs):
                            vals=Numbers(-10, 10), step=0.2,
                            delay=0.01,
                            max_delay='forever')
+        self.crahs()
Review comment (alexcjohnson): @giulioungaretti I missed this earlier - what's it about? We shouldn't ever get here, as …
Reply (giulioungaretti): yeah it's some leftovers, this PR took too long :D

 class GatesBadDelayValue(MockGates):
@@ -51,9 +52,9 @@ class TestInstrument(TestCase):
     def setUpClass(cls):
         cls.model = AMockModel()

-        cls.gates = MockGates(model=cls.model)
-        cls.source = MockSource(model=cls.model)
-        cls.meter = MockMeter(model=cls.model, keep_history=False)
+        cls.gates = MockGates(model=cls.model, server_name='')
+        cls.source = MockSource(model=cls.model, server_name='')
+        cls.meter = MockMeter(model=cls.model, keep_history=False, server_name='')

     def setUp(self):
         # reset the model state via the gates function
@@ -192,7 +193,7 @@ def test_remove_instance(self):
         with self.assertRaises(KeyError):
             Instrument.find_instrument('gates')

-        type(self).gates = MockGates(model=self.model)
+        type(self).gates = MockGates(model=self.model, server_name="")
         self.assertEqual(self.gates.instances(), [self.gates])
         self.assertEqual(Instrument.find_instrument('gates'), self.gates)
@@ -941,7 +942,7 @@ def setUp(self):
             name='testdummy', gates=['dac1', 'dac2', 'dac3'], server_name=None)

     def tearDown(self):
-        #TODO (giulioungaretti) remove ( does nothing ?)
+        # TODO (giulioungaretti) remove ( does nothing ?)
         pass

     def test_attr_access(self):
Review comment (alexcjohnson): do we really want to fail fast on travis? I'd think a full failure report is more useful. Also, if we're not supporting multiprocessing for now anyway, is it really worthwhile running the tests in both modes?
Reply (giulioungaretti): yes, because there are still people that just use multiprocessing, and knowing when we break things is useful. The -f was just so that people read the output; it seems like it's always ignored, so maybe if it's shorter it's easier?
Reply (alexcjohnson): Oh right, I thought I was going to see that this PR disabled the multiprocessing tests but it didn't :) Good, so we leave both modes in. If tests pass (which should be a requirement from now on, yes?), this doesn't shorten it at all. All it really does is shorten a horrendous multi-test failure into a single failure - which is enough to disqualify the commit, and it's nice that you get the feedback sooner, but I would still think the debugging cycle would be quicker if you could see all the failures at once. Of course it would be silly to debug only this way - after you see the error(s) on Travis then you test locally while fixing... dunno, if you prefer this way then leave it; in fact you could even go farther and chain the two commands so if the first fails, the spawn version doesn't even run.
Reply (giulioungaretti): @alexcjohnson it shortens the output of the failure; in general people should test locally, but I bet most don't. We could remove the multiprocessing tests altogether :D But that's for another PR.