"In the modern artificial intelligence ecosystem, this is what the transformer architecture is. It’s a complex systems which performs a large number of information processing and it is best described in terms of its architecture rather than in terms of its end-directed “computation”. "
Thanks for this interesting article. I am still at the beginning but already have a question: Do you know of interesting examples where the same level 2 transformation of inputs into outputs encodes different computations?
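To make the question concrete, here is a toy sketch of the kind of thing I mean (my own illustration, nothing from the article): two procedures whose input-output mapping is identical, yet whose internal computations differ.

```python
# Two procedures with the same input-output behaviour (the "level 2"
# transformation), realised by different internal computations.

def sum_to_n_iterative(n: int) -> int:
    """Accumulate 1 + 2 + ... + n step by step."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_to_n_closed_form(n: int) -> int:
    """Use Gauss's formula n(n+1)/2 -- no iteration at all."""
    return n * (n + 1) // 2

# Extensionally identical: the same output for every input ...
assert all(sum_to_n_iterative(n) == sum_to_n_closed_form(n) for n in range(100))
# ... yet the computations differ: O(n) accumulation vs O(1) arithmetic.
```

What I am after are examples of this flavour inside trained networks, where the distinction is presumably much harder to draw.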
"In the modern artificial intelligence ecosystem, this is what the transformer architecture is. It’s a complex systems which performs a large number of information processing and it is best described in terms of its architecture rather than in terms of its end-directed “computation”. "
I agree, but there are impressive efforts to distill a "computational level" from the architecture. Do you know about https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html ?
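For anyone who has not clicked through: the core idea there, as I understand it, is training a sparse autoencoder on a model's internal activations so that individual learned features become interpretable. A very rough sketch of that idea (my own simplification; the dimensions, sparsity penalty, and lack of biases or a training loop are placeholder assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features = 64, 256            # overcomplete: more features than dimensions
W_enc = rng.normal(0.0, 0.1, (d_model, d_features))
W_dec = rng.normal(0.0, 0.1, (d_features, d_model))

def encode(x):
    """ReLU encoder: sparse, non-negative feature activations."""
    return np.maximum(x @ W_enc, 0.0)

def decode(f):
    """Linear decoder: reconstruct activations from the feature dictionary."""
    return f @ W_dec

x = rng.normal(size=(8, d_model))        # stand-in for residual-stream activations
f = encode(x)
x_hat = decode(f)

# Training objective: reconstruct well while keeping features sparse
# (reconstruction error plus an L1 penalty on feature activations).
loss = np.mean((x - x_hat) ** 2) + 1e-3 * np.abs(f).mean()
```

Whether the features such a dictionary recovers deserve to be called a "computational level" is, I suppose, exactly the question.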
Maybe it would be worth adding a link to Marr's paper "Artificial intelligence—a personal view" ... thanks for pointing it out ... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Artificial+intelligence+%E2%80%94+a+personal+view&btnG=
Another question: once you add in something for learning / evolution, it starts looking like Tinbergen's four questions ... https://en.wikipedia.org/wiki/Tinbergen%27s_four_questions ... any comments?