try:
    import module
except ImportError:
    import slow_module as module
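Concretely, the same pattern with real modules (pickle standing in for the slow fallback; cPickle exists only on Python 2, so on Python 3 the except branch always runs):

```python
# Fallback-import pattern: prefer a fast implementation, fall back to a
# slower one. cPickle exists only on Python 2, so on Python 3 the
# ImportError branch runs and we end up with the pure-Python pickle.
try:
    import cPickle as pickle
except ImportError:
    import pickle

print(pickle.__name__)  # on Python 3 this prints "pickle"
```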
Conditional support testing would also break, like having tests which only run if module2 is available:

try:
    import module2
except ImportError:
    def if_has_module2(f):
        return unittest.skip("module2 not available")(f)
else:
    def if_has_module2(f):
        return f

@if_has_module2
class TestModule2Bindings(....
The proto-PEP also gives an example of using

with suppress_warnings():
    import module3

where some global configuration is changed only for the duration of the import. In general, "import this" and "import antigravity" - anything with import side effects - would stop working.
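As an illustration of such an import side effect: "import this" prints the Zen of Python while the module body executes. Capturing stdout during the import makes that visible (a small sketch, run in a fresh interpreter where `this` has not been imported yet):

```python
import contextlib
import io

# The first import of "this" runs the module body, which prints the
# Zen of Python. Redirecting stdout captures that import-time output.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    import this

print(buf.getvalue().splitlines()[0])
```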
Oh, and as the proto-PEP points out, changes to sys.path and other global state can cause problems because of the delay between the time of the lazy import and the time of resolution.
Then there's code where you do a long computation and then make use of a package which might not be present:

import database  # remember to install!!
import qcd_simulation

universe = qcd_simulation.run(seconds=30*24*60*60)
database.save(universe)
All of these would be replaced with "import module; module.__name__" or something to force the import, or by an explicit use of __import__. This can already happen with non-top-level imports, so it is not necessarily a new issue, but it could become more prevalent if there is an overall uptake of this feature for optional dependencies.
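A sketch of what "forcing" the import might look like (importlib.import_module plus an attribute touch; under a lazy-import scheme the attribute access is what would trigger the real load):

```python
import importlib

# Force a module to be fully resolved. Under today's eager imports this
# is a no-op beyond the import itself; under a lazy scheme, touching an
# attribute such as __name__ is what would trigger the actual load.
mod = importlib.import_module("json")
name = mod.__name__  # attribute access forces resolution
print(name)  # -> "json"
```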
I have zero concerns about this PEP and look forward to its implementation.
>>> import nonexistent_module
Traceback (most recent call last):
  File "<python-input-2>", line 1, in <module>
    import nonexistent_module
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1322, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1262, in _find_spec
  File "<python-input-0>", line 8, in find_spec
    base.loader = LazyLoader(base.loader)
    ^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'loader'
The implementation should probably convert that exception back to ImportError for you, but the point is that the absence of a module can still be detected eagerly while the actual loading occurs lazily.

I have bad memories of using a network filesystem where my Python app's startup time was 5 or more seconds because all the small file lookups for the imports were really slow.
I fixed it by importing modules in functions, only when needed, so the time went down to less than a second. (It was even better using a zipimport, but for other reasons we didn't use that option.)
If I understand things correctly, your code would have the same several-second delay as it tries to resolve everything?
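For comparison, the stdlib already supports this eager-find/lazy-exec split today; a minimal sketch using importlib.util.LazyLoader, where a missing module fails immediately but an existing module's body only runs on first attribute access:

```python
import importlib.util
import sys

def lazy_import(name):
    # Eager part: locating the module. A missing module fails here,
    # right away, with ImportError rather than a deferred surprise.
    spec = importlib.util.find_spec(name)
    if spec is None:
        raise ImportError(f"No module named {name!r}")
    # Lazy part: wrap the loader so the module body only executes on
    # first attribute access.
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # defers the real execution
    return module

json = lazy_import("json")   # no module code has run yet
print(json.dumps({"a": 1}))  # first attribute access triggers the load
```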
(Trying to do "fallback" logic with lazily-loaded modules is also susceptible to race conditions, of course. What if someone defines the module before you try to use it?)
edit: ok well "xxx in sys.modules" would indeed be a problem
In fact, all the code you see in the module is "side effects", in a sense. A `class` body, for example, has to actually run at import time, creating the class object and attaching it as an attribute of the module object. Similarly for functions. Even a simple assignment of a constant actually has to run at module import. And all of these things add up.
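That "everything runs" point is easy to demonstrate by executing a module body by hand (exec into a fresh module namespace, standing in for an import):

```python
import types

# Simulate importing a module: exec its source into a module namespace.
# The class body and its assignment both execute right now, at
# (simulated) import time, not when the class is first used.
source = """
class C:
    computed = 1 + 1   # runs during the (simulated) import

def f():
    return C.computed
"""
mod = types.ModuleType("demo")
exec(source, mod.__dict__)
print(mod.C.computed)  # -> 2
```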
Further, if there isn't already cached bytecode available for the module, by default it will be written to disk as part of the import process. That's inarguably a side effect.
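The stdlib even exposes where that side effect lands: importlib.util.cache_from_source maps a source path to the .pyc file that importing it would, by default, write.

```python
import importlib.util

# The cached-bytecode file that importing foo.py would create as a side
# effect: a version-tagged .pyc under __pycache__ (tag varies by
# interpreter version).
path = importlib.util.cache_from_source("foo.py")
print(path)
```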
Sure, you can declare global variables and run anything at a module file's global scope (outside function and class bodies), but even that 'global' scope is just an illusion: everything declared there, as you yourself said, is scoped to the module's namespace
(and you can't leak those 'globals' when importing the module unless you explicitly do so with 'from foo import *'. Think of Python's import as eval, but safer, because it doesn't leak the results of the module execution).
So for a module to have side effects (for me) it would have to either:
- change or create attributes of other modules, or
- call some other function that has side effects (reflection builtins? IO stuff)
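A tiny sketch of the first kind, with exec standing in for executing a module body on import ('monkeypatched' is a hypothetical attribute name, added purely for illustration):

```python
import math

# Simulate a module whose body mutates another, already-imported module.
# Once this "module" runs, the change is visible to every other importer
# of math.
module_body = """
import math
math.monkeypatched = True   # side effect on another module
"""
exec(module_body, {})  # stands in for 'import patcher'
print(math.monkeypatched)  # -> True
```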