Regarding hand uniqueness: If chording keyboards actually became popular, due to widespread VR or AR use, I like to imagine a future where you take a couple photos of your hand, and later that day a drone delivers a custom 3D printed chording keyboard perfectly molded to your hand and even your pinky length.
Of course, we're not there yet.
There is also the second-order complication that the user IS expected to develop their hand muscles through using the keyboard. Positions that are initially uncomfortable or even impossible for a user to hit can become at least usable for low-frequency characters.
So there's a balance, right? I would hope that asking a few personalization questions would be enough to capture the majority of hand shape differences (the 80/20 rule might apply) and actual preferences (e.g. Tabspace wants tab on the home row to help with autocomplete). A preference map of every position seems like overkill, but maybe it would be useful in discovering those personalization questions.
As for chord transitions, I don't know. I'm just throwing out ideas now, but maybe a bigram language model could be used as a tie-breaker when deciding between two closely scored layouts (this would require some "difficulty" or distance metric between two chords). That might start to "overfit" the configuration to the sample texts, though with a big enough corpus that might not be a problem.
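To make the tie-breaker idea concrete, here's a rough sketch. Everything here is made up for illustration: `chord_distance` is a toy difficulty metric (fingers that change state between two chords), and `transition_cost` weights it by bigram frequency in a sample text. A real metric would need actual ergonomic data.

```python
from collections import Counter

def chord_distance(a, b):
    """Toy difficulty metric: count fingers that must change
    state (press or release) between consecutive chords."""
    return len(a ^ b)

def transition_cost(layout, corpus):
    """Average chord-transition difficulty over the corpus,
    weighted by how often each bigram actually occurs."""
    bigrams = Counter(zip(corpus, corpus[1:]))
    total = sum(bigrams.values())
    return sum(
        count * chord_distance(layout[x], layout[y])
        for (x, y), count in bigrams.items()
    ) / total

# Two hypothetical layouts over a three-letter alphabet;
# chords are represented as sets of finger indices.
layout_1 = {"a": frozenset({0}), "b": frozenset({0, 1}), "c": frozenset({2})}
layout_2 = {"a": frozenset({0}), "b": frozenset({2}), "c": frozenset({0, 1})}

# Tie-breaker: between two closely scored layouts, prefer the
# one with the lower weighted transition cost.
sample = "abcabcab"
print(transition_cost(layout_1, sample))  # → ~1.857
print(transition_cost(layout_2, sample))  # → 2.0
```

On this (tiny, contrived) sample, `layout_1` would win the tie-break, since its common `a→b` transition only moves one finger. The overfitting worry from above shows up directly here: the result depends entirely on which bigrams dominate the sample text.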
Even if the process isn't fully automated, parts of a tool like this could at least be useful in generating and evaluating layouts, as compared to doing it totally "by hand."