Living dangerously is a bit too dangerous for my tastes. I’m not sure what the largest block you could possibly send via the current go-bitswap implementation is, but I wouldn’t rely too heavily on it.
I’m assuming there’s a preference for fewer full blocks.
How files are chunked into blocks is situational and depends on a number of variables, such as parallelization of requests, duplicate data received, how many CIDs you want to advertise (particularly if you want random access into the chunks of your files), data deduplication, and probably a few more that I’m missing.
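As a rough illustration of one of those tradeoffs (this is a toy sketch, not the actual importer code): with a simple fixed-size scheme like the default size-262144 chunker, the chunk size directly sets how many blocks, and therefore how many CIDs, a file turns into.

```python
# Toy fixed-size chunker: each chunk would become one block with its own CID.
# 256 KiB is the default IPFS chunk size (the "size-262144" chunker).
DEFAULT_CHUNK_SIZE = 256 * 1024

def chunk(data: bytes, size: int = DEFAULT_CHUNK_SIZE) -> list[bytes]:
    """Split data into fixed-size chunks; the last one may be smaller."""
    return [data[i:i + size] for i in range(0, len(data), size)]

file_data = bytes(1_000_000)  # a 1 MB file
blocks = chunk(file_data)
print(len(blocks))  # 4 blocks: three full 256 KiB chunks plus a remainder
```

Bigger chunks mean fewer CIDs to advertise and less per-block overhead, but coarser random access and less chance of deduplicating shared data between files.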
If you’re really concerned about block sizes, I’d do some rough benchmarking with your datasets and use cases to see what makes sense, or see what other folks have done with use cases similar to yours. I’d be wary of exceeding the safe limits without good evidence of how much it’ll help you, though, since it may not be worth the pain of potential breakage in the future.