Let's say I have the following directory structure:

a\
    __init__.py
    b\
        __init__.py
        c\
            __init__.py
            c_file.py
The problem is that when you run from within a directory, by default only the packages that are subdirectories of it are visible as candidate imports. Running from inside a, you therefore cannot import a.b.d; you can, however, import b.d, since b is a subpackage of a.
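You can see this with a small demo (the script name and the prints are mine, not part of the layout above): put the following file inside a/ and run it from there with python show_path.py.

# show_path.py - hypothetical demo script placed inside a/ and run from there
import sys

print(sys.path[0])   # the directory of this script, i.e. .../a

import b             # works: b/ is a subdirectory of a/, hence on sys.path

try:
    import a.b       # fails: the directory that contains a/ is not on sys.path
except ImportError as exc:
    print("cannot import a.b:", exc)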
If you really want to import a.b.d in c/__init__.py, you can accomplish this by adding the directory one level above a to the system path and changing the import in a/__init__.py to import a.b.c. Your a/__init__.py should look like this:
import sys
import os
# set the system path to the directory above so that a can be a
# package namespace
DIRECTORY_SCRIPT = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, DIRECTORY_SCRIPT + "/..")
import a.b.c
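With that in place, c/__init__.py can refer to other packages by their full dotted path; a minimal sketch (import a.b stands in for whichever package you actually need, e.g. the a.b.d above):

# c/__init__.py - a sketch; by the time this runs via the "import a.b.c" above,
# the directory above a/ is already on sys.path, so full-path imports resolve
import a.b   # substitute the package you really need, e.g. a.b.d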
An additional difficulty arises when you want to run the modules in c as scripts. In that case the packages a and b do not exist, because only the c directory (plus the standard locations) ends up on sys.path. You can hack the __init__.py in the c directory to point sys.path at the top-level directory, and then import __init__ in any module inside c so that the full path import a.b.d becomes usable. I doubt that importing __init__.py is good practice, but it has worked for my use cases.
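A sketch of that hack, assuming the a/b/c layout above (the variable names are mine, and the number of ".." components depends on how deeply c is nested; here c sits three levels below the directory that contains a):

# c/__init__.py - sketch of the hack described above
import os
import sys

# the top-level directory (the one containing a/) is three levels up from c/
_C_DIR = os.path.dirname(os.path.realpath(__file__))
_TOP_LEVEL = os.path.realpath(os.path.join(_C_DIR, "..", "..", ".."))
if _TOP_LEVEL not in sys.path:
    sys.path.insert(0, _TOP_LEVEL)

A module inside c that is run as a script can then pull in that hack and use full-path imports:

# c/c_file.py - run directly, e.g. "python c_file.py" from inside c/
import __init__      # executes c/__init__.py above, which fixes sys.path
import a.b.c         # full-path imports now resolve (likewise a.b.d, if it exists)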