@rauh: that shouldn't throw - what version of Immutant is this?
@tcrawley: 2.0.2
can you gist lein deps :tree for me?
and the full stacktrace?
@tcrawley: https://gist.github.com/rauhs/3f1368f60d282ec7504d
do you explicitly have org.immutant/wildfly in your dependencies? that should only be loaded when you are in-container, and is added to the war automatically by lein immutant war. It expects classes from the container to be available, so will throw if loaded outside the container
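e.g. your project.clj should look roughly like this (just a sketch - the other artifacts/versions here are placeholders, not taken from your gist):

(defproject myapp "0.1.0"
  ;; your real deps go here; clojure + immutant web are just placeholders
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.immutant/web "2.0.2"]
                 ;; [org.immutant/wildfly "2.0.2"] <- remove this one; it's
                 ;; in-container only, and lein immutant war adds it for you
                 ]
  :plugins [[lein-immutant "2.0.0"]])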
in-cluster? tries to load immutant.wildfly if it is available so it can call immutant.wildfly/in-cluster?, but will return false if that ns isn't available
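the check is roughly this shape (an illustrative sketch of the behavior, not the actual Immutant source):

;; try to load immutant.wildfly; if it loads, ask it, otherwise report false
(defn in-cluster? []
  (boolean
    (when (try (require 'immutant.wildfly) true
               (catch Throwable _ false))
      ((resolve 'immutant.wildfly/in-cluster?)))))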
Yes I do. I'll remove it then.
Maybe we should wrap the ns in a guard that prevents it from even trying to load outside of the container
I'll see what that would take
How do I use the namespace immutant.wildfly then?
I can't require it if I remove the dep
Would I manually have to (when (util/in-cluster?) (require 'immutant.wildfly))?
are you using functions from immutant.wildfly?
Not right now, I'm just evaluating and playing around
I have no experience with WildFly/JBoss
I just wonder how I'd start multiple deployments and how I can have them configured differently (same app).
you shouldn't ever need to call fns from immutant.wildfly directly - the useful ones have versions in immutant.util that wrap them with checks to make sure the ns is available, so you can pretty much safely ignore immutant.wildfly
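i.e. you can just do this and it works both in and out of the container:

(require '[immutant.util :as util])

;; false at a plain lein repl, the real answer inside WildFly
(util/in-cluster?)

(the other wrappers vary by version, so check the immutant.util docs for what's there)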
I see.
it might be tricky to have the exact same war file deployed multiple times with different configs, since there would be no good way for each deployment to know who it is
actually, you may be able to have the deployment figure out the context path it is on (those have to be unique), and it could use that information to load a config file off a known location on disk
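something like this on the app side, if we do expose it (the fn and paths below are hypothetical - load-config and the /etc/myapp dir are just for illustration):

(ns myapp.config
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]
            [clojure.string :as str]))

;; a war deployed at context /myapp-prod would read /etc/myapp/myapp-prod.edn;
;; `context-path` would come from whatever Immutant/the container exposes
(defn load-config [context-path]
  (let [ctx (or (not-empty (str/replace context-path #"^/" "")) "root")]
    (edn/read-string (slurp (io/file "/etc/myapp" (str ctx ".edn"))))))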
let me see if we expose that
Yeah, that's what I'm starting to realize. I started up the WildFly management console and there is no way to set any parameters of any kind for a deployment. I thought I could set a few config files in the web interface, say "this one is development" and "this one is production", and have them listen on different ports
The problem I'm wondering about: if I have a new version of my app and I want zero downtime (thus not shutting down the currently running app), I'd like to start up the new version (on a different port or a different vhost) and switch over my front-end load balancer. If the error rate goes up or something doesn't work, I switch back to the old (still running) version of the app
another option is to build a different war for each deployment, using profiles to set the config for each
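e.g. (a sketch - profile names/paths are just examples):

(defproject myapp "0.1.0"
  :plugins [[lein-immutant "2.0.0"]]
  ;; each profile points at a different config dir that gets baked into its war
  :profiles {:dev  {:resource-paths ["config/dev"]}
             :prod {:resource-paths ["config/prod"]}})

;; then build one war per environment:
;;   lein with-profile +prod immutant war
;;   lein with-profile +dev  immutant war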
How is something like this done in the WildFly world?
that's generally done with multiple wildfly instances, I believe
Hmm, I see, but they'll each start all the subsystems, like a separate messaging queue etc., no?
not if they are clustered - they will then share messaging
Ok I see, I'll def have to do some more reading on this.
Thanks for your help
my pleasure! let me know if you have any other questions/issues