How to deploy a Django app in Render.com (free tier)

This is a basic tutorial on how to deploy a Django app to Render.com, focused on using only the free tier features and automating things as much as possible. It doesn't cover how to build a Django app either: it assumes you already have a working app that you just need to deploy.

Assumptions

  • You have a working Django app (for instance, you can already run it locally with python manage.py runserver and everything works fine).

  • You're using Django 4 or newer (tested up to Django 4.2.4), and your project layout is the default one Django generates.

  • Your Django project is in a Github or Gitlab repository.

  • There's a single Django project in the repo, not multiple projects at once.

  • You want to use Postgres as your production database in Render.

0: Render.com account

Just go to Render.com, create a user (if you don't already have one), activate your account, and sign in to the Dashboard.

1: Defining dependencies

Create a requirements.txt file in the root of your repository, and add these dependencies in it:

django==5.1
dj-database-url==2.2.0
psycopg2-binary==2.9.9
whitenoise[brotli]==6.7.0
gunicorn==23.0.0

If you already have a requirements.txt, just add the new packages to it.

2: Creating deploy scripts

You will need two scripts: one that builds the web app server (installs dependencies, updates the database structure, etc), and another one that runs your web app. Render will use these two scripts when you want to deploy your web app.

Create a build.sh file in the root of your repository, with these contents:

# exit on error
set -o errexit

pip install -r ./requirements.txt

cd $(dirname $(find . | grep manage.py$))
python manage.py collectstatic --no-input
python manage.py migrate
python manage.py createsuperuser --username admin --email "YOUR@EMAIL.com" --noinput || true

In that script, replace YOUR@EMAIL.com with your real email.

Then create a run.sh file in the root of your repository, with these contents:

# exit on error
set -o errexit

cd $(dirname $(find . | grep manage.py$))
gunicorn $(dirname $(find . | grep wsgi.py$) | sed "s/\.\///g").wsgi:application

If you have a less standard project or repo structure, you can replace the dark magic in those scripts: the cd command just needs to get inside your Django project folder, and the gunicorn command needs to look something like gunicorn your_project_name.wsgi:application. But for normal project structures, the dark magic should work just fine :)
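For instance, here's a minimal explicit version of both scripts, assuming a made-up repo layout where manage.py lives in a myproject folder and the package containing wsgi.py inside it is also called myproject:

# build.sh, explicit version
set -o errexit

pip install -r ./requirements.txt

cd myproject
python manage.py collectstatic --no-input
python manage.py migrate
# (plus the createsuperuser line, if you use it)

# run.sh, explicit version
set -o errexit

cd myproject
gunicorn myproject.wsgi:application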

3. Django settings for Render

Now you need to add this to the end of your settings.py file:

# code needed to deploy in Render.com:
import os
import dj_database_url

if 'RENDER' in os.environ:
    print("USING RENDER.COM SETTINGS!")
    DEBUG = False
    ALLOWED_HOSTS = [os.environ.get('RENDER_EXTERNAL_HOSTNAME')]
    DATABASES = {'default': dj_database_url.config(conn_max_age=600)}
    MIDDLEWARE.insert(MIDDLEWARE.index('django.middleware.security.SecurityMiddleware') + 1,
                      'whitenoise.middleware.WhiteNoiseMiddleware')
    STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
    STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

This code basically overrides some of the settings of your project, so it can work well with what we are using inside Render.com: the database, the static files backend, etc. But it only does that when it detects your project is running inside Render, otherwise it does nothing to your settings.

You can further customize this if you have other settings that should have different values when running in Render.com. Just remember to never put secret stuff in there, because this will be committed to your repo. If you need to read any secret keys or values, you can use os.environ.get('MY_SECRET_THING_XYZ') and then define the value for that environment variable in the Render dashboard.
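For instance, a minimal sketch (DJANGO_SECRET_KEY and MY_API_TOKEN are just made-up example names, not something Render defines):

if 'RENDER' in os.environ:
    # values defined as environment variables in the Render dashboard,
    # so they are never committed to the repo:
    SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', SECRET_KEY)
    MY_API_TOKEN = os.environ.get('MY_API_TOKEN')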

4. Commit everything!!

Commit all your new files and modified files, and push the changes to your Github/Gitlab repo!

5. Creating your Postgres database

Go to Render's dashboard and create a new Postgres database using this menu:

/images/deploy-django-render/db_create_menu.thumbnail.png

You will need to specify the database name and a few other fields. After you've created your database, open its details page from the dashboard and copy the value of this field, to use in the next step:

/images/deploy-django-render/db_url_field.thumbnail.png

6. Deploying your app at Render

Go again to Render's dashboard and create a new "Web service" using this menu:

/images/deploy-django-render/web_create_menu.thumbnail.png

On the first page you will need to either fill in the URL of a public Github or Gitlab repository, or log in with your Github/Gitlab account to choose a private repository. After you have specified your repo, on the next page you will need to fill out a few fields:

/images/deploy-django-render/web_create_form.thumbnail.png
  • Name: important, this will be part of the url of your deployed web app, so use something meaningful.

  • Language: must be Python 3.

  • Branch: the branch of your repo from which the code should be cloned to be deployed. Usually just main.

  • Root Directory: important to leave this empty, so all the scripts are executed from the root directory.

  • Build command: here you will use your build script: bash build.sh

  • Start command: and here you will use your run script: bash run.sh

  • Instance type: Free works fine for small Django apps.

Scroll down to the "Environment Variables" section, and add three environment variables (it's important that the names are UPPERCASE):

  • DATABASE_URL: here you need to paste the database url that you copied at step 5. You can go to the database details to copy it again if needed.

  • DJANGO_SUPERUSER_PASSWORD: here set a password that you want to use for your Django superuser.

  • PYTHON_VERSION: set it to 3.11.0 (or newer?).

/images/deploy-django-render/web_env.thumbnail.png

And finally, just hit the "Deploy Web Service" button. Your website should be built and deployed in a couple of minutes! :)

After the deploy finishes, your website should be ready at https://YOUR_RENDER_WEB_SERVICE_NAME.onrender.com/

Re-deploying new versions

Now you can re-attempt any deploy, or manually deploy any version you wish. Just use this menu and everything should work:

/images/deploy-django-render/web_deploy.thumbnail.png

What's next?

  • Anytime you need to deploy a new version, you just push it to your repo, and you can use the manual deploy menu to re-deploy. You can even configure your Render app to use a different branch from your repository, so you can deploy from a "stable" branch instead.

  • The rest of Render's UI is pretty straightforward, explore it! There are plenty of useful things even in the free tier, including logs, usage metrics, etc.

  • The free tier doesn't include the web shell to access your running app directly, but you can still connect to it via SSH, using the "Connect" button next to the "Manual Deploy" one.

On AI killing art, and other fears

A few hours ago, a friend and ex-student of mine asked me this on twitter:

/images/art-killer-ai/lucas-question.thumbnail.png

Basically, what were my thoughts on the artists vs content-generation AIs debate.

I really wanted to answer in a couple of tweets. Believe me, I tried. But I can't, so instead of torturing him and any spectators with a kilometer-long twitter thread, I decided to make this a short-ish blog post :)

Disclaimer

My opinion on this topic is probably not the most objective one. I work with AI, and I teach in a couple of university AI courses. I even do some AI projects for fun in my spare time.

But at the same time, I care deeply about the ethics of AI and its impact on the world. I'm not a blind technology worshiper: I really want AI to be a tool used to build a better future for all, and I'm painfully aware that in its current forms it is already being misused a lot (concentrating power, amplifying discrimination, invading privacy, making people addicted to products, and so many other bad things). So I won't defend AI just for the sake of AI progress.

Still, don't forget that I'm not an artist, and artists should have a voice in this debate too.

A misconception

Before sharing any opinions I need to dispel a common misconception: that these AI models are plagiarizing content from artists. I'll be using image generation as the main example, but the same applies to lots of content-generation AIs.

These AIs don't plagiarize (most of them, the amazing ones making the news at least). These AIs aren't memorizing, copying existing work and reproducing it as their own. That's simply not how they work. I won't go deep into that, but what you need to understand is that these AI models learn (concepts, styles, techniques, etc), and then generate novel content based on what they've learned.

The process is in many ways similar to how humans learn skills: the AI learns by watching a ton of examples, trying to do its own stuff, and receiving feedback on how good the stuff was. Given enough time, the AI model ends up being pretty good at the thing you wanted it to do.

Learning styles, concepts, techniques, and more from the contents (art) of others, to then be able to create your own novel art, was never considered “plagiarizing”. And it would be a VERY bad idea to try to classify that as plagiarizing. Basically because that’s how human artists work too, you would end up banning the artistic process itself, because no artist is born from the ether knowing how to create art, nor is any artist able to create a new branch of art from nothing. We humans always learn from others. And now computers too.

With that out of the way, now my opinion on the four biggest fears/discussions I’ve seen repeated multiple times:

1. AI art will destroy human art

/images/art-killer-ai/robot_artist.thumbnail.png

I’ve seen quite a few people arguing that when computers can do art faster and more efficiently than humans, human art itself will die. There will be no more human artists, computers will replace that as they’ve replaced so many other manual tasks.

Honestly, I found this one hilarious. Mostly because people saying this sound like they have never spoken to a real, human artist.

Humans do art for lots of different reasons. Love, pleasure, to express feelings, to share ideas, to call for action, to relax, to improve themselves, to have fun, and so much more. People enjoy CREATING art, not just consuming it! If a computer can now create art too, so what? Most artists will keep doing art anyway because they feel amazing when doing their own art!

Would you stop eating ice cream just because a new computer can eat ice cream faster than you? How silly is that?

Human art won’t die. Stop panicking over that one.

2. AI art will destroy paid human artists

/images/art-killer-ai/robot_money.thumbnail.png

This one might sound similar, but it's not the same as the previous fear, even though the two are sometimes mixed together in discussions.

I absolutely believe human artists will keep doing art for the love of it. But there's a catch: not all art/content is produced, and especially sold, just for the love of art itself. Some forms of content are produced and paid for just for utilitarian reasons… and those are at risk of a huge impact, in my opinion.

The person buying a painting out of admiration for the person creating it, or listening to a song for the deep connection they feel with the artist, will probably keep consuming their favorite human’s art even if the AI produced art is cheaper or more easily available. The human author is part of the reason they want to consume that particular art form, and the AI art isn’t replacing that human connection.

But the person buying a cartoon for a children's toy ad? Or the person needing a background song for a YouTube video? Or the person designing a Christmas card for their company? Well, those will definitely buy cheap AI art instead of paying for expensive human handcrafted art. And I'm pretty sure AI art will be cheaper, and easier to get. Maybe even free.

So, yes, there will be an impact on some artists' income, we can’t deny that. It’s not a new problem, we’ve had a similar problem in the past: radio vs local musicians, widespread movie theaters and tv vs theater actors, ice sellers vs widespread fridges, factory workers vs robots, and so on.

IMHO, this is not just a problem in art: AI and other forms of automation will keep replacing humans in lots of jobs. That trend is not going anywhere, and will probably even accelerate. So the sooner we evolve into a society where people don’t need a job to be able to eat, the better…

So, human artists disappearing? Not a chance. Human artists having trouble selling "utilitarian" art? Yep, that will happen.

3. Permission to learn

/images/art-killer-ai/padlock_book.thumbnail.png

Another big discussion being held in many places relates to the basic question of the fairness (or even legality) of using other people's art/content to train your own AI model. Even if the model doesn't plagiarize the content, some people argue that just the act of learning from it should at least require some form of compensation or permission, or even that the owners should be able to prevent it from happening at all.

This, I think, is a well intentioned discussion with a terribly short-sighted proposal for a solution, one that will absolutely backfire if implemented. And I think that for two different reasons:

First:

Humans have always been able to learn from what they can see, for free. Yes, on many occasions you are required to pay to be able to access something, and then you can use that to learn (and I really hate that humanity is like that… I strongly believe knowledge should be free). But at least you are never required to ask for permission, or pay, to learn from something you have accessed (paid access or not). Once your senses perceive something, you are absolutely free to learn (or not!) from it, and no one can stop you. And gosh, I’m thankful that capitalism hasn’t yet destroyed that for us. Please, don’t lay the foundations for that to happen…

Second:

I know that people pushing these proposals are doing so to protect artists, and that’s a super noble goal. But requiring “permission/payment to learn” is absolutely not going to protect or help artists. Really. How do I know? Because as a society we already tried that solution, several times in different contexts, and it always failed astronomically.

We wanted to protect small inventors from big corporations using their inventions and benefiting from them for free. So we designed patents: “permission/payment to use your invention”. The result? Big corporations literally hoard thousands of patents and then use them to destroy smaller competitors. You have your patent and want to sue me? Good luck! I have an army of lawyers and 2896 patents which I’m pretty sure contain enough to sue you 10 times over.

We wanted to protect artists from big corporations using their art for free and benefiting from it (sounds familiar??). So we designed copyright: “permission/payment to use your art”. The result? Big corporations abuse their position to hoard legal ownership of art, which nowadays is almost never owned by the artists themselves, and then use that to squeeze all the money they can from anyone trying to share said art. Corporations make millions, the artists get pennies, the public is constantly fighting for fair use while corporations try to paywall or silence any kind of use, etc.

So, what do you think will happen if, to protect artists from corporations using their art to train models for free, we try to design some "permission/payment to train" process? Hint: corporations have lots of money, lots of lawyers, and lots of art ownership rights. Artists and the rest of us don't. It's almost as if solving problems by "requiring money to do X" always ends up benefiting those with lots of money instead.

4. The powerful getting more powerful

/images/art-killer-ai/concentration.thumbnail.png

Finally, there’s one last fear that I deeply share, because it’s my main fear with AI in general, not just for content generation: power keeps getting concentrated.

These AI models require a lot of money to train. The datasets are huge, and the training process consumes incredible amounts of computation. Not to mention the work of so many (absolutely not cheap) specialists who do the research, and then productionize the models. So only big players are able to train the best models; only big corporations or organizations can afford to do so. And the trend, while sometimes reversing a tiny bit, in general points towards that problem getting worse with every new generation of AI models: bigger models, more data, more money.

And at the same time, no one can deny the power of having these tools. From the new business models they enable, to the incredible advantages they might provide over traditional competitors. And then there are the darker use cases, like weaponizing realistically-sounding fake news generation.

Those two factors combined (cost required vs power gained) mean that we, as a society, continue walking a path in which the powerful get even more powerful, while the rest of us have fewer and fewer chances of reclaiming that power. Those who can train these models will have a greater edge over the rest of society. And those are usually big corporations, over which the rest of society has very little control.

Again, this is not particular to AI content generation. This is a problem with AI in general. But I’ve seen people raising this question in the AI art discussions, and it’s a really fair point. And I don’t have an answer for it :(

Too long, didn’t read

Human art won’t die. But artists selling “utilitarian” art should be worried. Requiring permission to train on art is going to backfire like crazy. And we keep giving more and more power to the already powerful, and that’s bad.

And yes, of course the images in this post are AI generated ;)

Programming an STM32F103C8 on Ubuntu with the Arduino IDE

Getting an STM32F103C8 board (or any STM32, really) working with the Arduino IDE on Ubuntu is not that simple; there are many tutorials and a lot of contradictory information on the web.

This is what worked in my case, on Ubuntu 20.04 LTS.

This tutorial assumes you already have a reasonably recent version of the Arduino IDE installed.

Step 1: Add STM32 support to the Arduino IDE

The first thing to do is to install the necessary plugins in the Arduino IDE, so the editor knows how to compile programs and flash them onto STM32 boards.

To do that, go to the "File" menu, "Preferences" option, and in the window that opens, click the button next to Additional Board Manager URLs:

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/menu_preferences.png/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/boton_boards_urls.png

In the second window that opens, we have to add the URL from which the Arduino IDE will download all the information about these boards. The URL is this:

http://dan.drown.org/stm32duino/package_STM32duino_index.json

It must be added at the end, on a new line (if you already had other URLs in this setting, as in my case), and then click "Ok":

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/board_url_agregada.png

Next, we have to tell the Arduino IDE to download from that URL the information it needs to control our STM32 board. Go to the "Tools" menu, then "Board", and choose the "Boards Manager..." option.

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/menu_boards_manager.png

The window that opens lets us filter the list of Arduino IDE plugins. We have to search for the ones we care about by typing "STM32" in the search box, and from the results, choose the one that says "STM32F1xx/GD32F1xx boards". After selecting it, an "Install" button will appear for that plugin; click it and wait for the installation to finish.

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/install_stm32_plugin.png

When the installation finishes, you can close the Boards Manager window.

Step 2: Choose the board and how to program it

Now we have to tell the Arduino IDE that we want to work with the new board we just installed. In the "Tools" menu, "Board", "STM32F1 Boards", choose the "Generic STM32F103C series" option:

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/elegir_board.png

Doing that adds some new options to the "Tools" menu, from which we have to pick two more:

"Tools", "Variant", "STM32F103C8"

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/elegir_variant.png

"Tools", "Upload Method", "STLink"

/images/programar-stm32f103c8-en-ubuntu-con-arduino-ide/elegir_upload.png

Step 3: Fix things that shouldn't be broken

With the previous steps done, in an ideal world and with more normal boards, everything would already be ready to write a program, click Play, and have our code run on the board connected to the PC. But with the STM32 boards this is broken, and a few manual steps are needed to fix it. I discovered them through trial and error, and this will probably change in the future.

Specifically: the Arduino IDE plugin for the STM boards ships with a small program used to flash the board, called "stlink-tools". The problem is that it ships an old, broken version (broken at least in modern environments).

So the first thing we'll do is delete that useless old version of the tool from inside the plugin, by running this command in a Linux terminal:

rm ~/.arduino15/packages/stm32duino/tools/stm32tools/2021.5.31/linux/stlink/st-*

If the command fails with an error saying the files don't exist, you most likely have a different version of either the Arduino IDE or the plugin. In that case you will need to find the right path for your particular setup (a different Arduino IDE version probably changes the name of the .arduino15 directory, and a different plugin version changes the name of the 2021.5.31 directory).

Once those files are deleted, we have to install a better version of the "stlink-tools" utility, with this command:

sudo apt install stlink-tools

Wait for it to download and install, and then there's one last little step: telling the Arduino IDE to use our new stlink tools. To do that, run these three commands. But careful!! The path used here must be the same one we used before to delete the old stlink tools from the plugin. If you changed any directory name in the deletion step, change it here in the same way:

ln -s /usr/bin/st-flash ~/.arduino15/packages/stm32duino/tools/stm32tools/2021.5.31/linux/stlink/
ln -s /usr/bin/st-info ~/.arduino15/packages/stm32duino/tools/stm32tools/2021.5.31/linux/stlink/
ln -s /usr/bin/st-util ~/.arduino15/packages/stm32duino/tools/stm32tools/2021.5.31/linux/stlink/

Done!

With these steps finished, we should now be able to write an example program in the Arduino IDE, and when pressing the play button (the right-facing arrow), our program should run on the board :)

An example program I've seen used in many tutorials (I don't know its original author, otherwise I would link to them):

const int ledPIN = PC13;

void setup()
{
  pinMode(ledPIN, OUTPUT);
}

void loop()
{
  digitalWrite(ledPIN, HIGH);
  delay(1000);
  digitalWrite(ledPIN, LOW);
  delay(1000);
}

If everything went well, when you hit play the green led on your STM32F103C8 board should blink exactly once per second :)

Separate IO from algorithms

This is an old post I wrote for the Machinalis blog. Machinalis was a company I worked at some years ago, that later on got acquired by Mercado Libre. The old post is no longer online, so I replicated it here to keep it somewhere on the web :)

Separate IO from algorithms

Being able to write clean, easy to maintain code is one of the most important skills a developer should have. And it isn’t an easy task to accomplish. We will often be presented with complex problems in which there is no clear “clean” solution. But at the same time, there are some simple practices that can help a lot in the path to better code.

In this post we will talk about one of those practices: separating IO code from algorithms. It’s not rocket science, and many will probably find this obvious. But experience shows that it’s something too often overlooked, and when that happens, the code tends to become messy quite fast.

A not-so-real example

Let’s start with a hypothetical task (later on we will look at a more real example). It will be something quite simple, but bear in mind I’m using it as a vehicle to present some ideas. In real life I would probably just use collections.Counter and the csv module :)

Imagine we have a .csv file, in which each line has the name of a developer and the language they use:

Guido Van Rossum,Python
Dennis Ritchie,C
Armin Ronacher,Python
Larry Wall,Perl
...

And we are asked to develop a small program that counts how many developers each language has. It must produce these results via standard output:

Python: 2
Perl: 1
C: 1
...

The code we would write to solve the task could be something like this:

def count_developers(file_path):
    quantities = {}
    with open(file_path) as developers_file:
        for line in developers_file:
            developer, language = line.strip().split(',')
            if language not in quantities:
                quantities[language] = 0
            quantities[language] += 1

    for language, quantity in quantities.items():
        print('{l}: {q}'.format(l=language, q=quantity))

And it works, it gets the job done. Even more: it looks like simple code, clean code.

But it has some not-so-obvious problems:

  • What if we want to write tests for it? That would be a problem: the tests would either have to create a file to use as input, and capture the standard output to check the results, or use lots of complex mocking to avoid the interaction with real files and real standard output.

  • What if at some point we need to count developers from a different source, like a json API response? We would need to create a .csv file just to be able to feed it into this function, even if our input data doesn’t come in a file.

  • What if we need to use the output in a different way instead of showing it to the user via standard output? This function forces the results to be shown in that particular way.

All these issues have the same root: our code is doing two things at the same time that should be separated. Our program deals with the IO logic (reading files, showing results) and the algorithm itself (the “business logic”) in a single piece of code.

In this simple example it would be trivial to refactor the code to solve any of those issues. But that kind of refactor (changing the input and output formats of a piece of business logic) tends not to be so trivial in real life code.

A better approach, then, is to follow the simple rule we mentioned at the beginning: separate IO code from algorithms. Following that rule, our solution would look more like this:

def read_developers_file(file_path):
    with open(file_path) as developers_file:
        return [line.strip().split(',')
                for line in developers_file]

def count_developers(developers):
    quantities = {}
    for developer, language in developers:
        if language not in quantities:
            quantities[language] = 0
        quantities[language] += 1
    return quantities

def show_report(quantities):
    for language, quantity in quantities.items():
        print('{l}: {q}'.format(l=language, q=quantity))

In this new solution, we clearly divided our code in three blocks: the code dealing with the input file, the counting algorithm itself (business logic), and the code dealing with the output of the results. We can easily test the business logic without mocking or doing real IO. We can easily reuse the business logic in scenarios where the input or output formats are different. Even if we have to support input data coming from a stream, something quite difficult with the previous approach, we could achieve that with simple refactors. This separation leaves the door open for changes in a way the old code didn’t.
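As a quick illustration of the testing point, a test for the counting logic is now just a function call with plain data. A minimal sketch using the standard unittest module (assuming the functions above live in a developers.py module):

import unittest

from developers import count_developers


class CountDevelopersTests(unittest.TestCase):
    def test_counts_developers_by_language(self):
        # plain data in, plain data out: no files, no captured stdout
        developers = [
            ('Guido Van Rossum', 'Python'),
            ('Dennis Ritchie', 'C'),
            ('Armin Ronacher', 'Python'),
        ]
        self.assertEqual(count_developers(developers), {'Python': 2, 'C': 1})


if __name__ == '__main__':
    unittest.main()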

A real example

A very common scenario in which this rule is neglected, leading to really ugly code, slow and complex tests, and overall hard-to-maintain software, is Django views. Developers too often write much of the business logic of their web apps right into the views. At first sight this doesn’t look “that bad”: the code is clean, simple, it’s just a view doing business stuff. But as we saw before, problems start to arise when we need to write tests, or reuse that business logic in slightly different scenarios.

When writing the tests, people usually just rely on the django.test.client to solve the “I need to do IO to test this logic” issue. The test client is great, it really solves the need of having to test a view. But the problem is: we shouldn’t be testing a view, when we just need to test a piece of business logic. We are doing lots of unnecessary extra work (url resolving, middlewares, etc), and complicating the test code, when it could have been just a function call.

And as you can imagine, things get really messy when we need to reuse that business logic that’s buried inside the view.

So, instead of writing views like this:

from django.shortcuts import render

def update_score(request, username):
    # logic to get the current score
    # logic to get the matches won
    # score = a little extra code calculating the new score
    # some more score updating
    # the last bits of the score update
    return render(request, 'score.html', {'score': score})

We should always try to write views more similar to this:

from django.shortcuts import render

import score_logic

def update_score(request, username):
    score = score_logic.update_score(username)
    return render(request, 'score.html', {'score': score})

Conclusion

Separating IO from algorithms might sound like obvious advice, but it’s a principle that is often overlooked, especially in web apps, leading to test suites that take too much time to run, and code that is indeed very hard to maintain.

It’s a simple rule, easy to follow, and it does prevent serious maintainability problems. So this is my advice: never again miss a chance to separate that function (or view) into dedicated IO and algorithms blocks. Your future self will be thankful :)

How special are katanas?

/images/katanas-especiales/katana.thumbnail.jpg

There are several possible angles from which to answer: we can look at what's special about it physically, compared to other swords; what's special about it technically (its use); and what's special about it culturally. I'll answer the first aspect in detail, the second a bit less, and the third very little. But spoiler alert: the katana is not a super special sword, as many people want to believe. Unfortunately there's a lot of misinformation on the subject :)

The physical aspect

Many people have the idea that katanas are especially light and agile swords, or incredibly sharp, or made of an extraordinarily resistant or high-quality metal, compared to the swords of the rest of the world, and especially those of medieval and renaissance Europe. Unfortunately, most of these ideas are rooted in myths, perpetuated by games, movies, and even very poorly informed martial arts instructors.

Weight:

Regarding weight and agility: the typical katana had a blade of about 70cm, and weighed around 1.1 or 1.2 kg. There's obviously a lot of variation in these numbers, but those are the most "normal" values among historical katanas.

If we want to compare that with European swords we run into a problem: the katana is a two-handed sword, so it would sound logical to compare it against two-handed swords, but by European standards the katana has a very short blade, the length of a one-handed sword's. So let's compare against both groups:

Comparing against European swords with blades of similar length, that is, one-handed swords, we can see that the katana weighs the same as a typical European sword of that length. Medieval swords with 70cm blades also weighed around 1.1 or 1.2 kg. We find variation here too, but again, those are the normal values. So the katana has no kind of super lightweight construction: it weighs literally the same as European swords of the same length.

Comparing against two-handed European swords, we notice that katanas are on average lighter. But it's an unfair comparison, because the typical two-handed European sword had a 90cm blade, not a 70cm one like the katana. That is, almost 30% longer. How much did the typical two-handed European sword weigh? Around 1.5kg (again, with variation). Which is perfectly logical! For a sword with a 30% longer blade, a somewhat higher weight is to be expected, and not because it was badly made or more "primitive", nothing like that. It's simply a longer sword, so it will contain more metal.

Regarding balance and weight distribution, we can also see that the katana's blade is very similar to European swords of comparable use: a wider base with some distal taper (thinning towards the tip), and a point of balance around 10cm from the guard.

With this information it's quite clear that the katana is not a more agile or faster weapon than any similar European sword. It's simply one more sword.

/images/katanas-especiales/largos.thumbnail.jpeg

(compare the blade length of the katana against the other European swords. Even the one-handed Victorian saber has a longer blade)

Edge:

An interesting aspect of the katana is that, because of its construction, the edge of the sword is made of a harder steel than the one used for its spine. This has some advantages regarding the edge, but not the ones people usually imagine.

Any steel, no matter how soft, can be sharpened until it's comparable to a scalpel. The problem with a soft steel is not that it can't be sharp, but that its edge will degrade much faster with use. Having a very hard steel at the edge allows the edge to last longer, requiring less maintenance.

Compared to European swords, the edge of a katana is usually a bit harder. Ergo, it tends to stay sharp a bit longer. But it's not an abysmal difference either.

And this also brings problems: a slightly harder edge is also a slightly more brittle edge. The edge of a katana is more likely to break and take nicks, which are very hard to repair, from impacts that a European sword would generally withstand a bit better.

And there's another super important aspect when evaluating how well a sword cuts: the thickness of the blade. A thinner blade has much better cutting ability than a thick one, because of the resistance it must overcome when entering the target. And in this respect the katana is problematic: the typical katana blade is very thick as swords go. Swords from other cultures, like the tulwars of India, are much thinner, and in practice are therefore far better at cutting than a katana.

Quality or resistance of the metal:

The myth that katana steel is some kind of incredibly good super steel has little basis in reality. The truth is that the raw material available in feudal Japan had many inclusions of unwanted materials (as in many parts of the world), and the goal of the folding process is to remove part of those inclusions, and to distribute more evenly the ones that can't be removed. The result is a good steel, no doubt. But not an extraordinary one. Any modern steel is far purer, and precise modern heat treatment achieves performance and resistance far better than the historical techniques of any culture, Japan included.

Many people believe that the folding process was some kind of super advanced technology exclusive to Japan, which the rest of the world didn't know, and that's why everyone else produced inferior steels. Nothing could be further from the truth: similar techniques were used in Europe (folding, and others even more complex, like the pattern welding of Viking-era swords). But over time, Europe developed furnaces that produced purer steels, making the folding process less necessary. It was a common technique, and one that at some point became obsolete.

/images/katanas-especiales/forjado-patron.thumbnail.jpg

(pattern welding, a technique similar to folding but more complex, very common in Viking swords)

Construction:

Here katanas do have a characteristic that many people know about, and which is indeed uncommon: the difference between the steel of the edge and the steel of the spine of the sword. This is achieved by using steels with different carbon content, and also by differentially hardening the blade (quenching the edge harder than the spine).

This allows dealing with the problems of impurities, producing a very good, quite hard edge, mounted on a spine with more capacity to yield without breaking so easily.

But this is not a characteristic exclusive to katanas. In Europe there are also historical examples of similar construction: a central spine of softer steel, with edges of harder steel.

Curvature:

A small detail, but worth clarifying: curved swords were also used in Europe, and a lot. This is not an innovation exclusive to Japan, nor something objectively superior that only they did.

Straight sword vs curved sword is a debate that is hundreds, if not thousands, of years old, and every culture had a variety of opinions on the matter. Context often made one or the other more suitable in different places and times. In Europe, armor dictated a trend towards straight swords (which thrust better, and cut slightly worse), simply because cutting at armor achieves nothing.

/images/katanas-especiales/kriegsmesser.thumbnail.jpg

(a European kriegsmesser)

Protection:

Another relatively interesting aspect is the style of guard chosen for katanas: a small disc. It's debatable, but it's generally considered to offer a much lower level of protection compared to other styles of guard, like the cruciform guard or more elaborate alternatives.

But whatever your opinion on its usefulness, having a disc-shaped guard is not a characteristic exclusive to katanas. There are other types of Asian swords with similar guards (like the Chinese Dao).

/images/katanas-especiales/dao.thumbnail.jpg

(a Chinese Dao)

Rigidity:

One last interesting aspect is that, compared to European swords, the katana usually has a somewhat stiffer, less elastic blade. This, too, has its advantages and problems.

The main advantage is that it makes cutting easier: it's a bit more forgiving when the attacker's technique isn't that good. European blades require better edge alignment to cut properly, while with a katana you can have the edge somewhat worse aligned (through flawed technique) and still cut effectively. This makes it quite friendly for novices. But an expert cutter can cut equally well with both.

It clearly also helps a bit in thrusts, since flexion is wasted energy.

The main disadvantage is that the elasticity of European swords makes them more durable. A very strong lateral impact flexes the blade, but it returns to its original shape, like a spring. The katana, because of how its steels are treated, has more of a tendency to bend and stay bent instead of returning to its original shape.

/images/katanas-especiales/flexibilidad.thumbnail.jpg

The technical aspect

Many people have the idea that in Japan the samurai dedicated their lives to studying sword combat, developing super advanced techniques, while in Europe people just brutishly clubbed each other, with no technique at all.

The reality, again, is not that. In both places, very advanced martial arts of sword combat existed and developed: complete systems with principles, techniques, variations of style, etc.

It's simply that in popular culture, Asian martial arts spread and were modernized enormously, while the European ones were practically forgotten.

But today there are many people practicing historical European martial arts (HEMA), using medieval and renaissance manuscripts written by the very masters who taught in those eras, to revive those arts.

And the interesting thing is that there's a huge amount in common between both worlds. There are techniques and stances that can be found almost identical between Japanese and Italian manuscripts. The principles guiding both arts are also very, very similar.

Ultimately, there aren't that many different ways to effectively use a two-handed sword of little more than 1kg. Those who used effective techniques survived; those who didn't, died. And so, after centuries of use, both cultures arrived at relatively similar conclusions.

/images/katanas-especiales/fiore.thumbnail.jpg

(a page of the manuscript "Il Fior di Battaglia", written around 1400. Full version online: Fior di Battaglia (MS Ludwig XV 13) )

The cultural aspect

Here there may be a very important difference between the katana and the swords of Europe (I won't speak of other cultures, because I don't know them as well).

In Europe the sword always had a bit of "mystique", but not at the level of religiosity that katanas acquired in Japan.

If someone in Europe thought they could make a sword with a better, more effective design, they made it, and no one would look at them as being "disrespectful of the culture of the sword". That allowed far more variation of formats and styles than what we see in Japan. It's not that katanas in Japan didn't vary, but the variations are very small in comparison: a slightly longer blade, a slightly less pronounced curve, a slightly wider guard, more or fewer facets on the blade, etc. In Europe, over the same time span, guards went from simple crosses to guards covering the whole hand; there were curved and straight blades at the same time, blades with parallel edges and sharply triangular blades, very wide and very narrow ones, crosses and pommels in dozens of shapes, etc.

This also led Japan to value and preserve its historical swords far better, and even to develop conservation arts around them. A blade from the year 1500 can be seen today in perfect condition, maintained by generations of polishers, and treasured by families with an almost religious devotion.

In Europe, on the other hand, most of the swords that survived are badly mistreated, poorly maintained, rusty, etc. And usually after years of lying around or being hidden somewhere, without anyone paying much attention to them.

This contributed a lot to the myth that katanas were especially well made: we see blades in nearly perfect condition on one side, while on the other we only see rusty things in a state of abandonment.

/images/katanas-especiales/conservacion_katana.thumbnail.jpeg /images/katanas-especiales/conservacion_europea.thumbnail.jpeg

(the typical state of conservation of a katana vs the typical state of conservation of a European sword)

In summary

The katana is a weapon with a very special place in today's world of swords, but more because of historical accidents and cultural differences than because of the characteristics of the weapon itself or of its associated techniques.

Encrypt a dir with Ecryptfs

This is a simple tutorial on how to have an encrypted directory in your Linux, by using Ecryptfs.

There are many alternatives that do the same, and I'm not qualified to do a deep comparison of them. But I've been using this solution for many years already, and it works like a charm. It relies only on command line tools, which are easy to automate. And because it uses a mounted filesystem, it's pretty transparent to other tools (backups, editors, etc): there's absolutely no need for specialized tools to work with your encrypted files.

Dependencies

sudo apt install ecryptfs-utils

Initial setup (one time only)

You need to do this only once, to set up your dir. After that, you won't have to run these steps in your daily use.

First create two dirs:

  • A dir where you will work with your files when the encrypted filesystem is mounted.

  • A dir where the encrypted data will live. You should never edit its contents by yourself.

mkdir super_secret_things
mkdir super_secret_things_encrypted

Then, mount the filesystem for the first time:

sudo mount super_secret_things_encrypted super_secret_things -t ecryptfs

This will ask you a bunch of questions. These are the answers I recommend, plus some explanations:

  • Passphrase: whatever you want to use as password :)

  • Cipher: aes (option 1. This is the algorithm used to encrypt, and AES is pretty good)

  • Key bytes: 32 (option 2. The size of the key, bigger usually means harder to hack)

  • Plaintext passthrough: no (this allows non-encrypted files to be used alongside encrypted ones, which I think is a bad idea. These are your super secret things, you need to be sure they are always encrypted).

  • Filename encryption: yes (fairly obvious: if the answer is no, people can see the names of your files without knowing the encryption password)

  • Signature confirmation: just press enter.

  • A confirmation asking whether you are sure you typed everything right, because this is the first time ecryptfs sees you using this combination of parameters with this directory. Just say yes, because doh, of course it's the first time.

  • And finally, whether you want ecryptfs to remember the answer to the previous question, so it doesn't ask you "are you sure? this is the first time..." every time you mount the encrypted dir. Answer yes.

You will need to remember the answers you chose, to be able to re-mount the encrypted dir in the future! For me this became muscle memory: "1 2 n y enter".

And that's it! Encrypted dir created!

Daily use

Whenever you need to work with your encrypted files, the steps are this:

  1. Mount the encrypted dir

sudo mount super_secret_things_encrypted super_secret_things -t ecryptfs
# answer "1 2 n y enter", or whatever you chose instead

You can also avoid having to answer all the questions, by passing all these extra parameters:

sudo mount super_secret_things_encrypted super_secret_things -t ecryptfs -o ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,ecryptfs_enable_filename_crypto=y,ecryptfs_fnek_sig=6b8de1a1e22ae45c
# (the "encryptfs_fnek_sig" signature is the one that the mount command asks you to verify in the final step, when not receiving all the extra params)

  2. Work with your files inside super_secret_things (REMEMBER!!! never edit the contents of super_secret_things_encrypted by yourself)

  3. Un-mount the encrypted dir

sudo umount super_secret_things

Of course, you can automate these with scripts, shell aliases, etc.
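For instance, a minimal sketch of two helper scripts (the names secret_open.sh and secret_close.sh are made up, and the fnek signature must be the one from your own first mount):

# secret_open.sh
set -o errexit
sudo mount super_secret_things_encrypted super_secret_things -t ecryptfs \
    -o ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,ecryptfs_enable_filename_crypto=y,ecryptfs_fnek_sig=6b8de1a1e22ae45c

# secret_close.sh
set -o errexit
sudo umount super_secret_things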

Hope this is as useful to you as it was for me :)

Longsword by Leonardo Daneluz, my best sword!

The sword

(click the photos to see the full resolution versions)

/images/daneluz-longsword-best-sword/1.thumbnail.jpg

It's a longsword made in the typical style of the year 1400, with a special reinforced tip (useful to fight against armor), and an actual medieval coin in the pommel.

Weight: 1.515 kg.

Length: 119.5 cm (118.5 cm not counting the pin block), 93 cm blade.

Width: 5.55 cm at the base of the blade.

Thickness: 6 mm at the base of the blade, then tapers down to 3.75 mm at the thinnest point (near the center of percussion), but then thickens again to 5 mm near the tip.

Blade steel: quenched and tempered SAE 5160.

Guard steel: quenched and tempered AISI 1045.

Grip: made of really hard wood (Lapacho), with both hexagonal (near the guard) and oval (near the pommel) cross sections. Covered in string to give it the texture, resin, and then cow leather above it.

Coin: the coin in the pommel is an actual coin from the year 1180, with an image of Manuel I Komnenos, a Byzantine emperor.

Maker: Leonardo Daneluz. You can see more of his high quality work on his Facebook page.

/images/daneluz-longsword-best-sword/2.thumbnail.jpg

Gosh, I'm really happy with this sword :D

/images/daneluz-longsword-best-sword/3.thumbnail.jpg /images/daneluz-longsword-best-sword/4.thumbnail.jpg

With "Il Fior Di Battaglia", a medieval martial arts manuscript from around 1400 (you can read it online!). People from that time didn't just crudely bash themselves with blunt heavy swords. In reality, swords were fairly light and agile, quite sharp, and they had very developed martial arts around their usage. Luckily many manuscripts from the time survive, which we can use to revive those martial arts (google "HEMA" if you want to see people practicing them).

/images/daneluz-longsword-best-sword/5.thumbnail.jpg /images/daneluz-longsword-best-sword/6.thumbnail.jpg

Here you can see the pin, which is the end of the tang (the part of the blade that continues inside the grip). And also a glimpse of the coin (better pictures below).

/images/daneluz-longsword-best-sword/7.thumbnail.jpg

The details of the coin. A little shiny because of the oil used to keep the sword from rusting. You can see the figure of Manuel I Komnenos.

/images/daneluz-longsword-best-sword/8.thumbnail.jpg

With a rondel dagger, a type of dagger designed to fight people in armor, usually carried by knights and men-at-arms alongside this kind of sword.

/images/daneluz-longsword-best-sword/9.thumbnail.jpg

Scolari is a HEMA study/practice group that we founded in Argentina. We focus on the teachings of Fiore dei Liberi, the medieval martial arts master that wrote the manuscript you saw before. https://www.facebook.com/scolariesgrimahistorica/

/images/daneluz-longsword-best-sword/10.thumbnail.jpg

A better picture of the coin, before it was embedded in the pommel.

/images/daneluz-longsword-best-sword/11.thumbnail.jpg

The sword without the grip and coin. You can see how the blade goes all the way through, sticking out of the pommel. This is how historical swords were built.

/images/daneluz-longsword-best-sword/12.thumbnail.jpg

To give it its final texture, the artisan not only uses string below the leather, but also above it. After a short while, the top string is removed, and the leather keeps its texture, providing better grip but also a nicer finish.

/images/daneluz-longsword-best-sword/13.thumbnail.jpg

That's a heck of a thick point.

/images/daneluz-longsword-best-sword/14.thumbnail.jpg

Leonardo quenching it! :)

He's well known for the quality and historicity of his work. If you like medieval swords, then you should definitely check his swords.

Hope you enjoyed it as much as I do. Bah, that's impossible :p

(Thanks Ruth Teller for most of the photos!)

The simplest Virtualenv tutorial (Python 3)

Python virtualenvs allow you to have isolated environments, in which you can install python libs and run your programs. This is useful when you have different projects with different requirements, and also to avoid installing python libs at system level.

This is how to use them in modern versions of python.

What do I need?

Python 3.3 or newer (older versions do have virtualenvs, but they are used in a slightly different way).

How do I create a new virtualenv?

Open a terminal, and run this:

python3 -m venv PATH_TO_YOUR_VIRTUALENV

The path it receives as a parameter is the location of a folder that will be created, containing your virtualenv. It will only contain the virtualenv; don't add any files inside that folder. Treat it like a "system" folder.

Example:

python3 -m venv /home/fisa/projects/my_blog/venv

(If you are using Windows, the path would instead look something like this: C:\Users\fisa\projects\my_blog\venv)

How do I use the virtualenv?

Each time you open a new terminal (console) to work in your project, you need to activate the virtualenv. The command to activate the virtualenv is different for Linux/MacOS vs Windows.

On Linux and MacOS, run:

source PATH_TO_YOUR_VIRTUALENV/bin/activate

(if you are using fish shell, replace activate with activate.fish at the end of that command)

On Windows:

PATH_TO_YOUR_VIRTUALENV\Scripts\activate.bat

From now on, the prompt of the terminal should say something like (venv) at the beginning (with the name of your virtualenv). This indicates that you are working inside your virtualenv.

With your virtualenv activated, if you now install libs with pip (example: pip install pandas), they will be installed inside the virtualenv. If you run a program inside that terminal, it will be able to import any libs installed in the virtualenv.

That's it?

Yep.

Well, there's more to it, but this is what you need to start using virtualenvs :)

Ok, but...

  • ... how do I deactivate the virtualenv? Just close that terminal. Or run deactivate.

  • ... how do I delete the virtualenv? Just delete the folder. Nothing else is created anywhere else.

  • ... can I move the virtualenv? No. Just delete it, and create a new one in the new location. Virtualenvs are disposable, don't get attached to them :) (your project should define its dependencies either in a requirements.txt or in a setup.py, so you can easily install all the dependencies at once in the new environment, as shown below)
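For example, a typical delete-and-recreate cycle could look like this (the paths are just examples, and the requirements.txt flow assumes that's how your project tracks its dependencies):

# with the old virtualenv activated, save the current dependencies:
pip freeze > requirements.txt

# create and activate a fresh virtualenv in the new location:
python3 -m venv /home/fisa/projects/my_blog/new_venv
source /home/fisa/projects/my_blog/new_venv/bin/activate

# and reinstall everything in it:
pip install -r requirements.txt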

Using Keras and TensorFlow with Nvidia gpus under Ubuntu

What are all these things???

  • Keras: the Python library that knows how to build and train artificial neural networks.

  • TensorFlow: the Python library that knows how to do heavy computations on both cpus and gpus, used by Keras.

  • CUDA + cuDNN: Nvidia utilities to be able to run general purpose computations in the gpu.

  • Graphics drivers: drivers that allow your linux to access and use the graphics card.

Graphics drivers

It might be possible to use CUDA without having the graphics drivers installed, but I'm not sure how easy and stable it is. So my recommendation is to install them first, and verify that they are working.

Usually it's just installing a package with apt:

sudo apt install nvidia-375

But if you are using an Optimus-enabled graphics card (most laptops with Nvidia cards previous to the 10xx generation), you might need to install the nvidia-prime package too.

The recommended version might be higher in the future, 375 is the one I'm using right now under Ubuntu 16.10.

CUDA installation

Get both the CUDA installer and the cuDNN installer, from their official websites: https://developer.nvidia.com/cuda-downloads and https://developer.nvidia.com/cudnn (you will need to register in the website and fill a survey to be able to download cuDNN).

The versions you need to get depend on which versions TensorFlow supports. You can check this on the official website, at https://www.tensorflow.org/install/install_linux .

Once you have both installers, first run the cuda installer (replace the name of the file with the one you got):

sudo sh ./cuda_8.0.61_375.26_linux.run --override

It will ask you a few things. Answer "no" to the installation of graphics drivers (you should already have them), and "yes" to the creation of the symbolic link.

Then uncompress the cuDNN installer (a file with a name similar to cudnn-8.0-linux-x64-v5.1.tgz), and copy its files into the /usr/local/cuda-8.0 folder (you should have it from the CUDA installation). The tar file contains subfolders, be sure to copy the files into the same subfolders in the destination.
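In my case that meant something like the following (the exact file names and subfolders may differ with other cuDNN versions, so check what the tar actually contains):

tar -xzf cudnn-8.0-linux-x64-v5.1.tgz
# the tar contains a cuda/ folder with include/ and lib64/ subfolders:
sudo cp cuda/include/cudnn.h /usr/local/cuda-8.0/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda-8.0/lib64/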

TensorFlow and Keras installation

Once you have the graphics drivers and CUDA, then it's easy to install TensorFlow and Keras, they are just Python packages:

pip install tensorflow-gpu keras

If you aren't using virtualenvs (you should!), you should add --user to that command. I don't recommend installing the packages system-wide with sudo, as with time you will probably need different versions of both tensorflow and keras for different projects (they both evolve quite quickly).

Running the code

Finally, when running your code you may need to define the LD_LIBRARY_PATH environment variable, for TensorFlow to be able to find the needed CUDA libraries:

LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64 python your_awesome_code.py

This is true also for Jupyter notebooks:

LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64 jupyter notebook

If you are unsure if your code is actually using your gpu, you can paste this snippet into a test_device.py file:

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

And then run it:

LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64 python test_device.py

It should print a lot of information, but in between you should see something with your graphics card name, like name: GeForce GTX 1070.

Customer and code quality

I hear this a lot in the software development community: "the customer doesn't care about the quality of the code, just about features, usability, user experience, etc.". It has many variations, but the idea remains the same: code quality isn't a factor in the decisions of the customer (internal or external).

Sometimes it's even used to justify writing, or not refactoring, bad code. Because the main thing is to give your customer some value, and code quality just doesn't do that.

I currently believe this is false. I believe that customers do care about the quality of the code, even if they don't understand what code is or how it works.

Cars

When buying a car, would you ask how the wheels are attached to the rest of the car? You probably wouldn't; you don't care about that. But what if car X had wheels so special that each time you must change a tire, you need to spend $10,000? Suddenly you care about wheel attachment systems. You would even feel cheated if someone sold you this car without warning you about its wheels.

Well, no, you don't care about wheel attachment systems specifically. But you don't want a car that costs lots of money every time it requires some maintenance. You care about maintenance costs.

Software

You probably get my point already.

Software, code, will need maintenance in most projects. And we know that one of the major factors defining the time (money) required to do code maintenance is how good the code is. Bad code costs more to maintain because it tends to have more bugs, to spawn new bugs more easily when modified, to be harder to read and understand, etc.

So the customer might have zero idea what code is. But they still care: they want software with reasonable maintenance costs, and that means code that is decent enough. And giving them bad code without warning them of its impact on maintenance is a way of cheating them.

Finally, customers tend to be unable to clearly see this relation by themselves, and developers usually don't want to tell them "this is taking too long to modify because we gave you bad quality code". This may be why we got the idea that customers don't care about code: developers usually fail to explain (or even hide) the strong relation between code quality and maintenance costs, so customers never come back asking for "better code".

/images/customer-and-code-quality/tech_debt.thumbnail.jpeg

(comic by Vincent Déniel)