If you want to keep your neural network architecture secret but still use it in an application, would somebody be able to reverse engineer the network from the weights file (.h5) alone?
The weights are written with
model.save_weights() and are loaded back into the model with
model.load_weights(). All other application code is properly encrypted in this case.
I would say no.
As an incomplete example: assume you are given three weight matrices. Even if you could somehow guess that they belong to simple convolution operations, you still would not know how they are wired together: the model could be conv3(conv2(conv1(x))), or conv2(conv1(x)) + conv3(conv1(x)), or many other options.
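A minimal NumPy sketch of this ambiguity (illustrative only; it uses plain matrix multiplies in place of convolutions, and the matrix sizes are made up): the same three weight matrices are shape-compatible with two different architectures that compute different functions, so the weights alone cannot tell you which one was used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three weight matrices recovered from a hypothetical weights file.
# Square shapes make both candidate architectures shape-compatible.
W1, W2, W3 = (rng.standard_normal((4, 4)) for _ in range(3))
x = rng.standard_normal(4)

def arch_a(x):
    # Candidate A: three stacked layers, W3(W2(W1 x))
    return W3 @ (W2 @ (W1 @ x))

def arch_b(x):
    # Candidate B: one shared layer feeding two parallel branches
    # that are summed, W2(W1 x) + W3(W1 x)
    h = W1 @ x
    return W2 @ h + W3 @ h

# Same weights, different wiring, different outputs.
print(np.allclose(arch_a(x), arch_b(x)))
```

Both graphs are legitimate uses of the same parameters, and real networks add far more degrees of freedom (activations, strides, skip connections), so the search space grows quickly.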
Why not encrypt your weights file as well? You already seem to have a secret-key mechanism in place to encrypt the rest of your model code.
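A sketch of that idea, assuming the third-party `cryptography` package is available (any symmetric cipher would do; Fernet is just a convenient authenticated option): encrypt the .h5 bytes before shipping, then decrypt in memory at startup before handing them to the model loader.

```python
from cryptography.fernet import Fernet  # assumes `pip install cryptography`

# Generate a secret key once; store it alongside whatever key material
# you already use to encrypt the application code.
key = Fernet.generate_key()
fernet = Fernet(key)

# Stand-in for the real contents of your .h5 weights file.
weights = b"\x89HDF\r\n\x1a\n fake weights payload"

# Encrypt before distribution; ship only `encrypted` with the app.
encrypted = fernet.encrypt(weights)

# At runtime, decrypt in memory and pass the bytes to your loader.
decrypted = fernet.decrypt(encrypted)
print(decrypted == weights)
```

Note that the decrypted weights must exist in memory while the model runs, so this raises the bar for casual inspection rather than defeating a determined attacker with debugger access.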