This is a feature request for a feature that isn't (as far as I know) implemented in any DB tool.
I'd call it "DB structure mining"; here is what I imagine it to be:
Because SQL is what it is, tables have to be linked via something. Whether it is an ID column that is controlled on the application side or a foreign key that the DBMS enforces, there is always a link between tables. Unfortunately, foreign keys carry a performance cost in the DBMS, so most apps I've seen just use plain ID columns and enforce the 'keys' on the application side.
I would like to be able to start from a table that I suspect contains the root data (customers, for example), take 100 or 1,000 rows' worth of values, and search for them in every other table. Whenever I find the same value in some column of another table, whatever that column is named, I then search for all the other column values on that row in every other table, and so on.
This basically lets me 'deduce' the internal data structure of the database and makes my life a lot easier, even if the deduction takes a while (I can control that by starting with more or fewer rows in the starting table).
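The first step of the idea above, finding columns in other tables that hold the sampled values, could be sketched roughly as follows. This is only a minimal illustration against a toy in-memory SQLite schema (the `customer`/`orders` tables and the subset-match heuristic are my assumptions, not anything an existing tool does):

```python
import sqlite3

# Toy database with an app-side link (orders.cust -> customer.id)
# and no declared foreign key; purely illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, name TEXT);
    CREATE TABLE orders (order_id INTEGER, cust INTEGER, total REAL);
    INSERT INTO customer VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.00), (12, 1, 3.50);
""")

def columns(conn, table):
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

def mine_links(conn, start_table, start_column, sample=100):
    """Find columns in other tables whose sampled values all occur in
    start_table.start_column -- candidate implicit foreign keys."""
    values = {row[0] for row in conn.execute(
        f"SELECT {start_column} FROM {start_table} LIMIT {sample}")}
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    hits = []
    for table in tables:
        if table == start_table:
            continue
        for col in columns(conn, table):
            found = {row[0] for row in conn.execute(
                f"SELECT DISTINCT {col} FROM {table} LIMIT {sample}")}
            # Require every sampled value to exist in the start column,
            # which cuts down on coincidental matches.
            if found and found <= values:
                hits.append((table, col))
    return hits

print(mine_links(conn, "customer", "id"))
# [('orders', 'cust')]
```

The second step of the request would then repeat the same search using the other column values of each matched row; the sample size plays the role of the "100, 1000 rows" knob described above.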
For the Navigator feature, we already have a request for supporting column name patterns as an alternative to FK/PK declarations when figuring out how tables are related. For instance, it could figure out that a column named CUSTOMER_ID is a reference to the ID column in the CUSTOMER table. The Query Builder supports a variation on this today, namely that columns with equal names are references.
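As a sketch of the name-pattern idea (the matching rule here is a guess at one plausible pattern, not the Navigator's actual logic), a `<TABLE>_ID` column could be resolved to `<TABLE>.ID` like this:

```python
import re

def guess_reference(column, tables):
    """Hypothetical heuristic: a column named <TABLE>_ID is treated as
    a reference to the ID column of <TABLE>, if that table exists."""
    m = re.fullmatch(r"(.+)_ID", column, re.IGNORECASE)
    if m and m.group(1).upper() in {t.upper() for t in tables}:
        return (m.group(1).upper(), "ID")
    return None

print(guess_reference("CUSTOMER_ID", ["CUSTOMER", "ORDERS"]))
# ('CUSTOMER', 'ID')
```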
Would something like that satisfy your needs? Blindly trying to match columns by looking at values as you suggest seems like it could be very error prone and time consuming.
The idea is to 'mine' the data, and it wouldn't be as error-prone as one might think; at worst, it would also detect duplicated data in other tables. It would allow a better understanding of poorly documented databases. It could also search for %column_value% as a substring (e.g., find where that IBAN code gets its currencies from).
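The %column_value% variant would just swap the equality match for a LIKE pattern across every column. A minimal sketch, again against an invented SQLite schema (the IBAN example tables are assumptions):

```python
import sqlite3

# Invented schema: an IBAN stored in one table and referenced inside
# free text in another.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (iban TEXT, owner TEXT);
    CREATE TABLE fx_rate (pair TEXT, note TEXT);
    INSERT INTO account VALUES ('RO49AAAA1B31007593840000', 'alice');
    INSERT INTO fx_rate VALUES
        ('EUR/RON', 'rate for RO49AAAA1B31007593840000');
""")

def find_substring(conn, value):
    """Search every column of every table for %value% matches and
    return (table, column, match_count) triples."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        for _, col, *_ in conn.execute(f"PRAGMA table_info({table})"):
            n = conn.execute(
                f"SELECT COUNT(*) FROM {table} WHERE {col} LIKE ?",
                (f"%{value}%",)).fetchone()[0]
            if n:
                hits.append((table, col, n))
    return hits

print(find_substring(conn, "RO49AAAA1B31007593840000"))
# [('account', 'iban', 1), ('fx_rate', 'note', 1)]
```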