May. 2nd, 2014

import_that: XKCD guy flying with Python (Default)
I have a mild dislike of Python's default prompt, ">>>". Not that the prompt itself is bad, but when you copy from an interactive session and paste into email or a Usenet post, the prompt clashes with the standard > email quote marker. So I've changed my first level prompt to "py>" to distinguish interactive sessions from email quotes. Doing so is very simple:

import sys
sys.ps1 = 'py> '

Note the space at the end.

You can change the second level prompt (by default, "...") by assigning to sys.ps2, but I haven't bothered. Both prompts support arbitrary objects, not just strings, which you can use to implement dynamic prompts similar to those iPython uses. Here's a simple example of numbering the prompts:

class Prompt:
    def __init__(self):
        self.count = 0
    def __str__(self):
        self.count += 1
        return "[%4d] " % self.count

sys.ps1 = Prompt()

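You can check the behaviour without touching sys.ps1: Python calls str() on the prompt object before each input line, so every call bumps the counter. A quick sketch:

```python
class Prompt:
    def __init__(self):
        self.count = 0
    def __str__(self):
        # Called once per prompt display, so the count advances each line.
        self.count += 1
        return "[%4d] " % self.count

p = Prompt()
print(str(p))  # [   1]
print(str(p))  # [   2]
```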
If you're trying to use coloured prompts, there are some subtleties to be aware of: you have to escape any non-printing characters. See this bug report for details.
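Here's a minimal sketch of a coloured prompt. The \001 and \002 bytes are readline's "start ignore" and "end ignore" markers; they tell the line editor that the ANSI colour codes between them take up no screen width, so cursor movement and line wrapping stay accurate. (The GREEN and RESET names are just my own labels for the escape sequences.)

```python
import sys

# \001 ... \002 bracket non-printing bytes so readline ignores their width.
GREEN = "\001\033[32m\002"
RESET = "\001\033[0m\002"

sys.ps1 = GREEN + "py> " + RESET
```

Without the \001/\002 markers the prompt still looks right at first, but editing long input lines misbehaves because readline thinks the prompt is wider than it really is.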

You can have Python automatically use your custom prompt by setting it in your startup file. If the environment variable PYTHONSTARTUP is set, Python will run the file it names when you start the interactive interpreter. As I am using Linux for my desktop, I have the following line in my .bashrc file to set the environment variable each time I log in:

export PYTHONSTARTUP=/home/steve/python/

and the startup file itself then sets the prompt, as shown above.
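Putting the pieces together, a minimal startup file (whatever file you point PYTHONSTARTUP at) might look like this; the ps2 line is optional and just shown for completeness:

```python
# Run automatically before the first interactive prompt appears,
# provided PYTHONSTARTUP names this file.
import sys

sys.ps1 = 'py> '   # first-level prompt
sys.ps2 = '...> '  # second-level (continuation) prompt, optional
```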
Last September, there was a fascinating interview with Karen Sandler by Linux Format. Karen Sandler is the Executive Director of the Gnome Foundation, and she spoke about learning just how vulnerable the software controlling medical devices is, and her efforts to be permitted to audit her implanted heart defibrillator's software.

I was so freaked out about this. I kept trying to talk to doctors about it and they wouldn’t listen to me, or they just didn’t know how to handle the conversation with me. I had one electrophysiologist who I talked to who just hung up the phone on me.

You would probably freak out too if you learned that any script kiddie with an iPhone could take control of your pacemaker and deliver a fatal electric shock. But it wasn't until the late, brilliant, Barnaby Jack and University of Massachusetts associate professor Kevin Fu demonstrated how to take remote control of medical implants fitted with wi-fi that people started to take Karen's concerns seriously.

Wireless medical implants that will talk to any device that says hello. What could possibly go wrong?

Karen continues:

I realised that it’s not just my medical device; it’s not just our lives that are relying on this software: it’s our cars, and our voting machines, and our stock markets and now our phones in the way that we communicate with one another. We’re building all this infrastructure, and it’s putting so much trust in the hands of individual corporations, in software that we can’t review and we can’t control. Terrifying.


25% of all medical device recalls in the last few years have been due to software failure.

Karen's argument is that independent, public review of the source code is the best way to guarantee that bugs and security vulnerabilities are found and corrected as quickly as possible. It's not that open source software is necessarily bug-free, but that there is more opportunity for bugs to be found and fixed, and less opportunity for manufacturers to stick their head in the sand and deny there is a problem. Sunlight is the best disinfectant, and openness and transparency are essential for security. Keeping source code secret doesn't make it more secure. If secrecy were all it took, Windows would be free of viruses and malware. In fact, secrecy is often counter-productive:

I used to decry secret security systems as "security by obscurity." I now say it more strongly: "obscurity means insecurity." — Bruce Schneier

When the television series "Homeland" first aired an episode involving a plot to commit assassination by remote-controlling a pacemaker, it was widely derided as being unrealistic. That was until former American Vice-President Dick Cheney publicly acknowledged that the risk of remote exploits was seriously considered when he was fitted with a pacemaker.

Unless we treat the security of medical devices and other complex systems seriously, it is only a matter of time before somebody is murdered by remote control. In fact, it may even have already happened. No less than former "security czar" Richard Clarke has warned that the death of investigative journalist Michael Hastings mere hours after he wrote to friends that he was going "off the radar" was completely consistent with a remote attack on his car.


Steven D'Aprano
