Apple is going to do the full repair or bust. I guarantee that. You can't even process a partial repair like you're asking within Apple's logistics system.
So I have had the opportunity to meet with Apple security engineers a couple of times as a function of my job. (FYI, you don't want to know how insecure most medical devices are; oy, they make Wi-Fi lightbulbs look like Fort Knox.) I think I have some insight into this particular design decision.
The CPU and sensor are paired because you don't want to send the sensor data unencrypted across the wire, where it's then subject to spoofing/MITM attacks. This sounds dumb, but there have been demonstrated attacks on iOS devices that tap directly into the various interconnects. During a read from the Secure Enclave, the unit enters a special trusted secure mode that is inaccessible to the kernel and installed software. This is part of Apple's secure boot chain and runtime security.
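To make the pairing idea concrete, here's a toy challenge-response sketch: the device and sensor share a per-pair key, so the device can verify it's talking to the original sensor before trusting its readings. Every name here is hypothetical illustration on my part; this is emphatically NOT Apple's actual protocol, which uses hardware-fused keys inside the Secure Enclave rather than anything software-visible like this:

```python
import hashlib
import hmac
import os

class PairedSensor:
    """Toy stand-in for a Touch ID sensor holding a factory pairing key."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key  # provisioned at the factory for this exact pair

    def respond(self, challenge: bytes) -> bytes:
        # Prove possession of the pairing key without ever revealing it.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Device:
    """Toy stand-in for the phone side of the pairing check."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key

    def verify_sensor(self, sensor: PairedSensor) -> bool:
        challenge = os.urandom(32)  # fresh nonce, so replayed responses fail
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, sensor.respond(challenge))

factory_key = os.urandom(32)
device = Device(factory_key)

genuine = PairedSensor(factory_key)          # original factory-paired sensor
swapped = PairedSensor(os.urandom(32))       # third-party swap, wrong key

print(device.verify_sensor(genuine))  # True
print(device.verify_sensor(swapped))  # False
```

The point of the sketch: a replacement sensor, however honest, simply cannot produce valid responses without the pairing key, which is exactly why a third-party swap trips the device's integrity check.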
This is not as in-depth as a presentation I attended a while back but as good as you're likely to get publicly facing.
99.9% of us just say, "popsnizzle, son, re-pair the sensor and device and get on with your life." Well, you in the 99.9% probably had a regular sensor installed, but the device can't be 100% sure of this. Maybe the sensor includes some "extras" that try to get the device to divulge encrypted information or perform other attacks against it. It is well known that state-sponsored attacks include intercepting devices in transit and installing monitoring hardware/software on them.
So yeah, we are not talking about your run-of-the-mill attacks but highly sophisticated, expensive attacks. Think state-sponsored CIA/NSA/KGB sorts of attacks. Given Apple's place in the market, they are legitimately concerned about these sorts of issues and the protection of user data.
Replacing the sensor outside Apple's guidelines and controlled process is a poor decision. I'm 100% behind this requirement in this type of device, given the pressures of the modern world.
Now, bricking the device with error 53, I'm less enthused about that. I think it's probably appropriate that when the device detects the sensor may have been compromised, Touch ID should be disabled. The device could and should fall back to passcode entry and report an error.
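The fallback policy I'm arguing for is simple enough to sketch. This is my hypothetical alternative, not anything Apple ships; the names are illustrative only:

```python
from enum import Enum, auto

class SensorState(Enum):
    TRUSTED = auto()
    UNTRUSTED = auto()  # pairing check failed, e.g. third-party replacement

def unlock_methods(sensor_state: SensorState) -> list[str]:
    """Degrade gracefully instead of bricking on integrity failure."""
    if sensor_state is SensorState.TRUSTED:
        return ["touch_id", "passcode"]
    # The sensor can't be verified, so never trust its readings. But the
    # passcode path doesn't depend on the sensor, so leave it available
    # and surface an error to the user rather than bricking the device.
    return ["passcode"]

print(unlock_methods(SensorState.UNTRUSTED))  # ['passcode']
```

Apple's counterargument, as I read it, is the paragraph that follows: once any part of the trust chain is suspect, even the "safe" passcode path may be running on a compromised device.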
I expect Apple's argument would be that, in a situation where the sensor or other hardware has been compromised, unlocking the device could be bad. If an attack device has been installed instead of a regular sensor, then once the phone is unlocked, that device could use code injection or other exploits (the inevitable buffer-overrun bugs) to attack the phone in its decrypted state. Apple has incredibly smart people working for them. I think in this situation Apple has decided: if the sensor isn't trusted, the boot chain isn't trusted, the security process isn't trusted, the Secure Enclave isn't trusted, and we can't ensure the integrity of this device. In this state, malware or other processes could be running and could extract user data (perhaps storing it for future retrieval).
This is a completely legitimate viewpoint to have, and also overkill for 99% of their user population.
Perhaps a compromise would be to sell a device that bricks itself on integrity loss to organizations and users that want or require that level of security, while the rest of us mortals could buy a model that simply reports an error state on integrity loss, leaving it up to us to make the call. Given Apple's history, I don't think this is likely to happen.
That's not to say there isn't a business case for Apple wanting repair revenue, but I don't think it's anywhere close to a deciding factor for them.