python-internals

Why doesn't Python optimize away temporary variables?

Submitted by 帅比萌擦擦* on 2019-12-22 08:17:38
Question: Fowler's Extract Variable refactoring method, formerly Introduce Explaining Variable, says to use a temporary variable to make code clearer for humans. The idea is to elucidate complex code by introducing an otherwise unneeded local variable, and naming that variable for exposition purposes. It also advocates this kind of explanation over comments. Other languages optimize away temporary variables, so there is no cost in time or space resources. Why doesn't Python do this? In [3]: def multiple_of
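A quick way to see that CPython keeps the explaining variable is to inspect the compiled code object. This is a sketch, not from the original question; the function name and the `is_high` variable are illustrative:

```python
import dis

def with_temp(ppm):
    is_high = ppm > 400   # explaining variable, kept only for readability
    return is_high

def without_temp(ppm):
    return ppm > 400

# CPython compiles the temporary into a real local slot rather than
# optimizing it away, so the two functions do not share bytecode:
print('is_high' in with_temp.__code__.co_varnames)
print(dis.Bytecode(with_temp).dis() == dis.Bytecode(without_temp).dis())
```

The first print shows the temporary survives as a named local; the second shows the disassemblies differ, i.e. the store/load pair is still executed at run time.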

Large memory footprint of integers compared with result of sys.getsizeof()

Submitted by ぃ、小莉子 on 2019-12-22 07:59:43
Question: Python integer objects in the range [1, 2^30) need 28 bytes, as reported by sys.getsizeof() and explained, for example, in this SO post. However, when I measure the memory footprint with the following script:

# int_list.py:
import sys
N = int(sys.argv[1])
lst = [0]*N  # no overallocation
for i in range(N):
    lst[i] = 1000 + i  # ints not from the integer pool

via /usr/bin/time -f peak_used_memory:%M python3 int_list.py <N>, I get the following peak memory values (Linux x64, Python 3.6.2): N Peak memory in Kb bytes
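The getsizeof numbers themselves are easy to reproduce; the gap the question asks about is the overhead on top of them, e.g. the 8-byte pointer each list slot holds plus per-object allocator overhead. A small sketch, assuming 64-bit CPython:

```python
import sys

# Per-object sizes as reported by getsizeof (64-bit CPython):
print(sys.getsizeof(1000))    # header plus one 30-bit digit, typically 28
print(sys.getsizeof(2**30))   # a second digit is needed, typically 32

# On top of that, the list itself stores one pointer per element:
lst = [1000 + i for i in range(1000)]
per_slot = sys.getsizeof(lst) / len(lst)
print(per_slot >= 8)          # at least an 8-byte pointer per slot
```

Note that getsizeof never includes allocator bookkeeping or fragmentation, which is why /usr/bin/time reports a larger peak than N * 28 bytes.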

Embedding Python in C: Error in linking - undefined reference to PyString_AsString

Submitted by 风流意气都作罢 on 2019-12-22 06:46:29
Question: I am trying to embed a Python program inside a C program. My OS is Ubuntu 14.04. I am trying to embed the Python 2.7 and Python 3.4 interpreters in the same C code base (as separate applications). Compilation and linking work when embedding Python 2.7, but not for Python 3.4; it fails at the linking stage. Here is my C code (just an example, not real code), simple.c:

#include <stdio.h>
#include <Python.h>

int main(int argc, char *argv[]) {
    PyObject *pName, *pModule, *pFunc, *pValue;
    char module[]
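The undefined reference in the title is the classic symptom of calling a Python 2 C-API function against the Python 3 headers and library: the PyString_* family was removed when str became Unicode (PyUnicode_AsUTF8 is the usual Python 3 replacement on the C side). A Python-level sketch of the same str/bytes split that drove the rename:

```python
# Python 3 made every str Unicode, which is why PyString_AsString no
# longer exists in the C API; obtaining a C-style char* now requires an
# explicit encoding step, mirrored here with .encode().
s = 'hello'
b = s.encode('utf-8')
print(type(s).__name__)        # str
print(type(b).__name__)        # bytes
print(b.decode('utf-8') == s)  # True
```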

How int() object uses “==” operator without __eq__() method in python2?

Submitted by 本秂侑毒 on 2019-12-22 01:58:09
Question: Recently I read "Fluent Python" and understood how the == operator works with Python objects, via the __eq__() method. But how does it work with int instances in Python 2?

>>> a = 1
>>> b = 1
>>> a == b
True
>>> a.__eq__(b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'int' object has no attribute '__eq__'

In Python 3, a.__eq__(b) returns True. Answer 1: Python prefers to use the rich comparison functions (__eq__, __lt__, __ne__, etc.), but if those don't exist,
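In Python 2, plain int relied on the legacy __cmp__/tp_compare slot rather than rich-comparison methods; Python 3 dropped __cmp__ and gave int a real __eq__. A Python 3-runnable sketch of both facts (the OnlyEq class is illustrative, not from the original thread):

```python
# Python 3: the AttributeError from the question is gone.
print((1).__eq__(1))   # True
print((1).__eq__(2))   # False

# == dispatches through type(a).__eq__; returning NotImplemented lets
# Python try the reflected operation and finally fall back to identity.
class OnlyEq:
    def __eq__(self, other):
        if isinstance(other, int):
            return True
        return NotImplemented

print(OnlyEq() == 5)     # True
print(OnlyEq() == 'x')   # False: both sides returned NotImplemented
```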

Why doesn't the namedtuple module use a metaclass to create nt class objects?

Submitted by 不想你离开。 on 2019-12-22 01:34:22
Question: I spent some time investigating the collections.namedtuple module a few weeks ago. The module uses a factory function which interpolates the dynamic data (the name of the new namedtuple class, and the class attribute names) into a very large string. exec is then run with the string (which represents the code) as the argument, and the new class is returned. Does anyone know why it was done this way, when there is a specific tool for this kind of thing readily available, i.e. the metaclass? I
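For contrast, here is a minimal sketch (illustrative only, and far less capable than the real collections.namedtuple) of building such a class with three-argument type() instead of exec:

```python
from operator import itemgetter

def mini_namedtuple(typename, field_names):
    # Assemble the class namespace directly instead of exec-ing generated source.
    def __new__(cls, *args):
        if len(args) != len(field_names):
            raise TypeError(f'expected {len(field_names)} arguments')
        return tuple.__new__(cls, args)
    ns = {'__new__': __new__, '__slots__': ()}
    for index, name in enumerate(field_names):
        # Each field becomes a read-only property over the tuple slot.
        ns[name] = property(itemgetter(index))
    return type(typename, (tuple,), ns)

Point = mini_namedtuple('Point', ['x', 'y'])
p = Point(3, 4)
print(p.x, p.y)              # 3 4
print(isinstance(p, tuple))  # True
```

This demonstrates that the dynamic class creation itself does not require exec; the question is why the stdlib chose the string-template approach anyway.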

Why close a cursor for Sqlite3 in Python

Submitted by 青春壹個敷衍的年華 on 2019-12-21 17:41:50
Question: Is there any benefit to closing a cursor when using Python's sqlite3 module? Or is it just an artifact of DB-API 2.0 that might only do something useful for other databases? It makes sense that connection.close() releases resources; however, it is unclear what cursor.close() actually does, whether it actually releases some resource or does nothing. The docs for it are unenlightening:

>>> import sqlite3
>>> conn = sqlite3.connect(':memory:')
>>> c = conn.cursor()
>>> help(c.close)
Help on
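A sketch (not from the original post) of deterministic cursor cleanup with contextlib.closing; one observable effect of cursor.close() in sqlite3 is that the cursor becomes unusable afterwards:

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(':memory:')
with closing(conn.cursor()) as cur:
    cur.execute('CREATE TABLE t (x INTEGER)')
    cur.execute('INSERT INTO t VALUES (1)')
    conn.commit()

with closing(conn.cursor()) as cur:
    row = cur.execute('SELECT x FROM t').fetchone()
print(row)  # (1,)

# After close(), further use of the cursor raises ProgrammingError:
try:
    cur.execute('SELECT 1')
except sqlite3.ProgrammingError:
    print('cursor is closed')
conn.close()
```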

class attribute lookup rule?

Submitted by 不打扰是莪最后的温柔 on 2019-12-21 12:59:37
Question:

>>> class D:
...     __class__ = 1
...     __name__ = 2
...
>>> D.__class__
<class 'type'>
>>> D().__class__
1
>>> D.__name__
'D'
>>> D().__name__
2

Why do D.__class__ and D.__name__ return the type-level values (<class 'type'> and 'D'), while D().__class__ and D().__name__ return the attributes defined in class D? And where do built-in attributes such as __class__ and __name__ come from? I suspected __name__ or __class__ to be simple descriptors that live in the object class or somewhere similar, but this can't be seen. In my understanding, the attribute

Complexity of len() with regard to sets and lists

Submitted by 独自空忆成欢 on 2019-12-20 17:32:39
Question: The complexity of len() with regard to both sets and lists is O(1). How come it takes more time to process sets?

~$ python -m timeit "a=[1,2,3,4,5,6,7,8,9,10];len(a)"
10000000 loops, best of 3: 0.168 usec per loop
~$ python -m timeit "a={1,2,3,4,5,6,7,8,9,10};len(a)"
1000000 loops, best of 3: 0.375 usec per loop

Is it related to the particular benchmark, as in, it takes more time to build sets than lists and the benchmark takes that into account as well? If the creation of a set object
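One way to separate construction cost from the len() call is timeit's setup argument, which builds the container once outside the timed statement. A sketch (not from the original answers; absolute numbers will vary by machine):

```python
import timeit

# As benchmarked in the question: the timed statement includes construction.
build_list = timeit.timeit('a=[1,2,3,4,5,6,7,8,9,10];len(a)', number=100_000)
build_set  = timeit.timeit('a={1,2,3,4,5,6,7,8,9,10};len(a)', number=100_000)

# Construction moved into setup, so only len() itself is timed.
len_list = timeit.timeit('len(a)', setup='a=[1,2,3,4,5,6,7,8,9,10]', number=100_000)
len_set  = timeit.timeit('len(a)', setup='a={1,2,3,4,5,6,7,8,9,10}', number=100_000)

print(build_set > build_list)   # set construction must hash every element
print(f'len only: list {len_list:.4f}s, set {len_set:.4f}s')
```

With construction excluded, the two len() timings come out much closer, consistent with both being O(1) reads of a stored length field.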