When the Python version on your Linux is outdated, compile the sources!
When the Python on your Linux system is outdated, compile the sources and install Python in your user directory! I'll give an example for Debian and Python 3.5.2. Most newbies are scared of using a C compiler, but it's easier than you think once you have learned to interpret the error messages.
I want to install Python 3.5.2 on Debian, which requires a C compiler. Let's install all the necessary tools:
> sudo apt-get install build-essential
This gives you everything you need to compile C programs, except libraries. A library in C consists of a binary blob plus an include file (header) that is pulled in at compile time. In C, an include statement looks like this:
#include <stdio.h>
If that file doesn't exist, you get an error message. This is a hint that you haven't installed the developer files for a certain library. With Debian and Ubuntu it's easy: just search for the library name with a -dev suffix:
> apt-cache search libsqlite3
> sudo apt-get install libsqlite3-dev
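If you want to check whether a header is already available before you start compiling, a quick test (just a sketch) is to run only the preprocessor over a one-line file:
> echo '#include <sqlite3.h>' > check.c
> gcc -E check.c > /dev/null
If the -dev package is missing, gcc reports something like "fatal error: sqlite3.h: No such file or directory"; after installing libsqlite3-dev the command runs silently.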
The approach is similar on other distributions. For Python we need the following packages (a combined one-liner follows the list):
> sudo apt-get install tk-dev
> sudo apt-get install libbz2-dev
> sudo apt-get install libgdbm-dev
> sudo apt-get install ncurses-dev
> sudo apt-get install liblzma-dev
> sudo apt-get install libsqlite3-dev
> sudo apt-get install libssl-dev
> sudo apt-get install libreadline6-dev
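Combined into a single command, that's:
> sudo apt-get install tk-dev libbz2-dev libgdbm-dev ncurses-dev liblzma-dev libsqlite3-dev libssl-dev libreadline6-dev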
Now download the Python source and unpack the archive
> wget https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tgz
> tar xvzf Python-3.5.2.tgz
> cd Python-3.5.2
Now you have to decide where to install Python. I'm using a user directory, /home/<myuser>/opt, because I don't want to pollute the system. Running configure gives the Python source all the information it needs to compile the files:
> ./configure --prefix=$HOME/opt
A remark: if you have not installed tk-dev, for example, configure will mention it somewhere in its output and you won't get tkinter. It's always a good idea to check the output of ./configure.
A
> make all
and a
> make install
complete the operation. Now add the new Python directory to your PATH. Put
export PATH=~/opt/bin:$PATH
in your ~/.bashrc file.
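After opening a new shell (or sourcing ~/.bashrc), check that the freshly built interpreter is the one being picked up; it should show something like:
> source ~/.bashrc
> which python3
/home/<myuser>/opt/bin/python3
> python3 --version
Python 3.5.2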
If you use numpy, just install it as a wheel; no compiling is necessary:
> python3 -m pip install numpy
If you are a heavy user of scientific packages, the Anaconda Python distribution is a better choice. My method saves a lot of space in the virtual machines I use every day.
/r/Python
http://redd.it/55mmso
A Benchmark of matrix multiplication between C and Python
#Motivation
After a Python convention in my city (Python Brasil), I (an unqualified newbie) and a friend of mine from comp. sci. academia discussed the potential advantages of Python with a few colleagues, including its use for numerical applications in the scientific field.
One of their arguments was that the runtime optimization provided by PyPy offers a significant advantage over C.
Well, without further ado, here are the source codes for each language.
#Source Codes
**python w/ numpy**
#! /usr/bin/python2
import sys
import numpy as np
import time

# n is read for interface parity with the other versions but not used here
n = int(sys.argv[1])
m1_file = sys.argv[2]
m2_file = sys.argv[3]

# load the matrices written by the generator
m1 = np.loadtxt(m1_file)
m2 = np.loadtxt(m2_file)
m3 = np.zeros(m1.shape)

# time only the multiplication itself (the result is written into m3)
start = time.time()
np.matmul(m1, m2, m3)
end = time.time()

print m3
print 'Time:', (end - start) * 1000.0
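Assuming the script above is saved as matmul_numpy.py and a matrix file has been produced with the generator shown further down (both file names are made up for illustration), it is run as:
> python2 matmul_numpy.py 500 500.matrix 500.matrix
The pure-Python version below takes the same three arguments.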
**python**
#! /usr/bin/python2
import sys
import time

n = int(sys.argv[1])
m1_file = sys.argv[2]
m2_file = sys.argv[3]

def readm(filename):
    # read the first n columns of the first n rows as floats
    f = open(filename, 'r')
    d = f.read()
    mat = [[float(i) for i in row] for row in [s.split(' ')[0:n] for s in d.split('\n')[0:n]]]
    return mat

m1 = readm(m1_file)
m2 = readm(m2_file)
m3 = [[0 for i in range(n)] for j in range(n)]

# naive triple-loop matrix multiplication
start = time.time()
for i in range(n):
    for j in range(n):
        for k in range(n):
            m3[i][j] += (m1[i][k] * m2[k][j])
end = time.time()

for i in range(n):
    for j in range(n):
        print m3[i][j],
    print ''
print 'Time:', (end - start) * 1000.0
**C**
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Two variants, selected at compile time: -DARRAY uses one flat buffer,
   -DMATRIX uses an array of row pointers. */
#ifdef ARRAY
void readm(FILE* f, int n, double* m) {
#endif
#ifdef MATRIX
void readm(FILE* f, int n, double** m) {
#endif
    int i, j;
#ifdef ARRAY
    for (i = 0; i < n * n; ++i)
        fscanf(f, "%lf", &m[i]);
#endif
#ifdef MATRIX
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            fscanf(f, "%lf", &m[i][j]);
#endif
}

int main(int argc, char** argv) {
    int i, j, k;
    double start, end;
    struct timeval tv_start, tv_end;
    int n = atoi(argv[1]);
    FILE* f1 = fopen(argv[2], "r");
    FILE* f2 = fopen(argv[3], "r");
#ifdef ARRAY
    double* m1 = (double*) malloc(sizeof(double) * n * n);
    double* m2 = (double*) malloc(sizeof(double) * n * n);
    double* m3 = (double*) malloc(sizeof(double) * n * n);
    for (i = 0; i < n * n; ++i) m3[i] = 0;
    readm(f1, n, m1);
    readm(f2, n, m2);
    gettimeofday(&tv_start, NULL);
    /* naive triple-loop multiplication on the flat buffer */
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            for (k = 0; k < n; ++k)
                m3[(i * n) + j] += (m1[(i * n) + k] * m2[(k * n) + j]);
    gettimeofday(&tv_end, NULL);
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j)
            fprintf(stderr, "%lf ", m3[(i * n) + j]);
        fprintf(stderr, "\n");
    }
#endif
#ifdef MATRIX
    double** m1 = (double**) malloc(sizeof(double*) * n);
    double** m2 = (double**) malloc(sizeof(double*) * n);
    double** m3 = (double**) malloc(sizeof(double*) * n);
    for (i = 0; i < n; ++i) {
        m1[i] = (double*) malloc(sizeof(double) * n);
        m2[i] = (double*) malloc(sizeof(double) * n);
        m3[i] = (double*) malloc(sizeof(double) * n);
    }
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            m3[i][j] = 0;
    readm(f1, n, m1);
    readm(f2, n, m2);
    gettimeofday(&tv_start, NULL);
    /* naive triple-loop multiplication on the row-pointer matrix */
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            for (k = 0; k < n; ++k)
                m3[i][j] += (m1[i][k] * m2[k][j]);
    gettimeofday(&tv_end, NULL);
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j)
            fprintf(stderr, "%lf ", m3[i][j]);
        fprintf(stderr, "\n");
    }
#endif
    /* result goes to stderr, timing (in milliseconds) to stdout */
    start = ((double) tv_start.tv_sec * 1000.0) + ((double) tv_start.tv_usec / 1000.0);
    end = ((double) tv_end.tv_sec * 1000.0) + ((double) tv_end.tv_usec / 1000.0);
    printf("Time:%lf\n", end - start);
    return 0;
}
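The ARRAY and MATRIX variants are selected with preprocessor defines, so each one is built separately. A possible compile and run line (the file name matmul.c is an assumption), using the -O3 flag mentioned in the results below:
> gcc -O3 -DARRAY -o matmul_array matmul.c
> gcc -O3 -DMATRIX -o matmul_matrix matmul.c
> ./matmul_array 500 500.matrix 500.matrix 2> result.txt
The result matrix goes to stderr (redirected here), while the timing is printed to stdout.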
**matrix generation code**
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <string.h>

int main(int argc, char** argv) {
    srand(time(0));
    int n = atoi(argv[1]);
    char buf[128];
    strcpy(buf, argv[1]);
    strcat(buf, ".matrix");
    FILE* f = fopen(buf, "w");
    int i, j;
    double num;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j) {
            num = (double) rand() / (double) RAND_MAX;
            num = num * pow(10, rand() % 4);
            fprintf(f, "%lf ", num);
        }
        fprintf(f, "\n");
    }
    fclose(f);
    return 0;
}
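The generator writes an n×n matrix of random values to a file called <n>.matrix, so a run might look like this (the source file name genmatrix.c is an assumption; -lm is needed for pow):
> gcc -O3 -o genmatrix genmatrix.c -lm
> ./genmatrix 500
> ls *.matrix
500.matrix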
The machine used had an i7 (that's as much as he managed to tell me); he ran Debian Stretch for the Python code and used -O3 optimization for C.
[Full results](https://i.imgur.com/S5YC4kM.png)
[Results without standard naïve Python](https://i.imgur.com/13ZjEuK.png)
[Logarithm scale](https://i.imgur.com/TNXjyyG.png)
From my understanding, numpy simply calls a pre-compiled matrix-multiplication routine (typically from a BLAS library written in C/Fortran rather than C++); it's also possibly using a smarter algorithm such as Strassen and/or some form of parallelization, which would be much faster than the naïve implementation of matrix multiplication.
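If you want to see which pre-compiled backend a given numpy build links against, numpy can print its build configuration (this only shows the linked BLAS/LAPACK libraries, not the algorithm used):
> python2 -c "import numpy; numpy.show_config()"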
Then again, I don't know much; I couldn't even figure out how to use GitHub for this quick post.
Sorry for the atrocious English, I hope this benchmark proves informative :D
/r/Python
https://redd.it/79wmcw
Python++; The Future is Here!
main.pypp:
from pypp import *
#include <iostream>
#include <vector>
#pragma GCC optimize ("O3")
#pragma GCC target ("avx,avx2")
sync_with_stdio(False);
cin.tie(0); cout.tie(0);
cout << "C++ IO in python, guaranteed ";
cout << 200 << " % speedup" << endl;
a = 0;
b = 0;
a <<= cin;
b <<= cin;
c = "";
c <<= cin;
A = vector<<int>>(5);
A <<= cin;
B = vector<<int>>(4,2);
cout << a << ' ' << b << '\n';
cout << c << '\n';
cout << A << '\n' << B << endl;
pypp.py
import sys
/r/Python
https://redd.it/bajfy1
Does anyone follow the Fat Model, Skinny View design pattern in real-world projects?
We're currently dealing with some very thick views at work (think legacy startup code, lol), and I'd like to establish a convention to avoid this in the future. I've found a few resources that recommend encapsulating business logic in your models.
While I'm considering the 'Fat models, skinny views' approach, I'm also curious about any potential pitfalls or challenges that might come with it.
Could you share any insights or experiences in this regard?
Thanks in advance!
References:
- Official docs: https://docs.djangoproject.com/en/5.0/misc/design-philosophies/#include-all-relevant-domain-logic
- Best practices site I found: https://django-best-practices.readthedocs.io/en/latest/applications.html#make-em-fat
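For concreteness, here is a minimal sketch of what the pattern tends to look like (model, field, and URL names are invented for illustration, not taken from any real project):
# models.py -- "fat model": the business rule lives on the model
from django.db import models

class Order(models.Model):
    total = models.DecimalField(max_digits=10, decimal_places=2)
    paid = models.BooleanField(default=False)

    def mark_paid(self):
        # encapsulate the state change instead of poking at fields from a view
        self.paid = True
        self.save(update_fields=["paid"])

# views.py -- "skinny view": fetch, delegate, redirect
from django.shortcuts import get_object_or_404, redirect

def pay_order(request, pk):
    order = get_object_or_404(Order, pk=pk)
    order.mark_paid()
    return redirect("order-detail", pk=pk)
The usual caveat I've read is that fat models can drift into god objects, so some teams move the more complex rules into model managers or plain service functions instead.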
/r/django
https://redd.it/1dlxhg3