Worst/Trickiest code I have ever seen
It’s easy to write bad code, but it takes a real genius to produce truly terrible code. And the people who wrote the Python program hyperopt were clearly very clever.

Have a look at this function (don’t worry about what it is doing), from tpe.py:

# These produce conditional estimators for various prior distributions
@adaptive_parzen_sampler('uniform')
def ap_uniform_sampler(obs, prior_weight, low, high, size=(), rng=None):
    prior_mu = 0.5 * (high + low)
    prior_sigma = 1.0 * (high - low)
    weights, mus, sigmas = scope.adaptive_parzen_normal(
        obs, prior_weight, prior_mu, prior_sigma)
    return scope.GMM1(weights, mus, sigmas, low=low, high=high, q=None,
                      size=size, rng=rng)

The details don’t matter here, but clearly it’s calling some function “adaptive_parzen_normal” which returns three values, then passing those to another function called “GMM1” and returning the result.

Pretty straightforward? With me so far? Great.

Now here is some code that calls this function:

fn = adaptive_parzen_samplers[node.name]
named_args = [[kw, memo[arg]] for (kw, arg) in node.named_args]
a_args = [obs_above, prior_weight] + aa
a_post = fn(*a_args, **dict(named_args))

Okay, this is getting quite messy, but with a bit of thinking we can understand it. It’s just calling the ‘ap_uniform_sampler’ function, whatever that does, while letting us pass in parameters in some funky way.
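That funky way is just Python’s standard argument unpacking: a list supplies the positional arguments via `*`, and a list of `[keyword, value]` pairs becomes keyword arguments via `**dict(...)`. A minimal sketch, with made-up stand-in names:

```python
# Hypothetical stand-in for ap_uniform_sampler; it just echoes what it received.
def sampler(obs, prior_weight, low, high, size=(), rng=None):
    return (obs, prior_weight, low, high, size, rng)

positional = ["observations", 0.25]    # like a_args = [obs_above, prior_weight] + aa
named = [["low", 0.0], ["high", 1.0]]  # like the named_args list of pairs

# *positional fills obs and prior_weight; **dict(named) fills low and high.
result = sampler(*positional, **dict(named))
# result == ("observations", 0.25, 0.0, 1.0, (), None)
```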

So a_post is basically whatever “GMM1” returns (which is a list of numbers, fwiw).

Okay, let’s continue!

fn_lpdf = getattr(scope, a_post.name + '_lpdf')
a_kwargs = dict([(n, a) for n, a in a_post.named_args
                 if n not in ('rng', 'size')])
above_llik = fn_lpdf(*([b_post] + a_post.pos_args), **a_kwargs)

and that’s it. There’s no more code using a_post.

It took me a whole day to figure out what on earth is going on here. But I’ll give you, the reader, a hint: this is not running any algorithm; it’s constructing an Abstract Syntax Tree and manipulating it.

If you want, try and see if you can figure out what it’s doing.

Answer:

In fact, the return value of GMM1 is not used at all. GMM1 isn’t even run.

So let’s look at ap_uniform_sampler again:

# These produce conditional estimators for various prior distributions
@adaptive_parzen_sampler('uniform')
def ap_uniform_sampler(obs, prior_weight, low, high, size=(), rng=None):
    prior_mu = 0.5 * (high + low)
    prior_sigma = 1.0 * (high - low)
    weights, mus, sigmas = scope.adaptive_parzen_normal(
        obs, prior_weight, prior_mu, prior_sigma)
    return scope.GMM1(weights, mus, sigmas, low=low, high=high, q=None,
                      size=size, rng=rng)

So this returns a tree structure that looks like:

Call “GMM1” with parameters:
    Get first tuple value from result of: Call “adaptive_parzen_normal”
    Get second tuple value from result of: Call “adaptive_parzen_normal”
    Get third tuple value from result of: Call “adaptive_parzen_normal”
    Set named parameter “low” to “low”
    Set named parameter “high” to “high”
    Set named parameter “q” to “q”
    Set named parameter “size” to “size”
    Set named parameter “rng” to “rng”

adaptive_parzen_normal is run here, but it also returns a tree structure. Note that the ‘low’, ‘high’, etc. variables are themselves call graphs.
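To make the “calls that don’t run anything” idea concrete, here is a toy illustration of the pattern (these are not hyperopt’s real classes, just a sketch of the technique): calling the function records the call as a tree node rather than computing anything.

```python
# Toy AST node: a call recorded as data, not executed (illustrative only).
class Node:
    def __init__(self, name, pos_args, named_args):
        self.name = name              # the called function's name, as a string
        self.pos_args = pos_args      # positional arguments (may be Nodes too)
        self.named_args = named_args  # list of (keyword, value) pairs

def GMM1(*args, **kwargs):
    # Instead of evaluating a mixture model, build a tree node describing the call.
    return Node("GMM1", list(args), sorted(kwargs.items()))

tree = GMM1("weights", "mus", "sigmas", low=0.0, high=1.0)
# Nothing has been evaluated: tree.name is just the string "GMM1",
# and tree.pos_args / tree.named_args record what would have been passed.
```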

Now here is the tricky part:

The code doesn’t run this graph; instead it grabs the literal string “GMM1”, appends “_lpdf” to it, then runs that function, using a different set of samples and the parameters that were supposed to be passed to GMM1, minus the ‘size’ and ‘rng’ parameters. So effectively it’s doing:

# These produce conditional estimators for various prior distributions
@adaptive_parzen_sampler('uniform')
def ap_uniform_sampler(samples, obs, prior_weight, low, high,
                       size=(), rng=None):
    prior_mu = 0.5 * (high + low)
    prior_sigma = 1.0 * (high - low)
    weights, mus, sigmas = scope.adaptive_parzen_normal(
        obs, prior_weight, prior_mu, prior_sigma)
    return scope.GMM1_lpdf(samples, weights, mus, sigmas,
                           low=low, high=high, q=None)

Got that? Good!
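The string-rewriting dispatch at the heart of this can be shown in isolation. All names below are illustrative stand-ins, not hyperopt’s actual API:

```python
# Simplified sketch of dispatch-by-string-concatenation (illustrative names).
class Scope:
    @staticmethod
    def GMM1_lpdf(samples, weights, mus, sigmas, **kwargs):
        return ("lpdf-of", samples, kwargs)

scope = Scope()
node_name = "GMM1"                             # like a_post.name: a literal string in the AST
fn_lpdf = getattr(scope, node_name + "_lpdf")  # finds GMM1_lpdf by gluing strings together

kwargs = {"low": 0.0, "high": 1.0, "size": (), "rng": None}
a_kwargs = {k: v for k, v in kwargs.items() if k not in ("rng", "size")}

# Called with different samples, and without the 'size'/'rng' parameters.
out = fn_lpdf("samples", "weights", "mus", "sigmas", **a_kwargs)
# out == ("lpdf-of", "samples", {"low": 0.0, "high": 1.0})
```

Nothing in the type system or the call site hints that “GMM1” and “GMM1_lpdf” are linked, which is exactly why this is so hard to follow.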

Maintainers are usually nervous about changing code that works and is in production for many users, so I’ve made a pull request on GitHub to at least document this:

https://github.com/hyperopt/hyperopt/pull/290

Thoughts? Am I just stupid? Was this code obvious to you? How would you improve this?
