Hi,
Here we will look at WebLogic-provided utilities that test whether multicast messages can flow properly across our network. These utilities help us understand whether there is any network issue between the cluster members.
MulticastMonitor Utility:
MulticastMonitor is a stand-alone Java command-line utility that monitors multicast traffic on a specific multicast address and port.
Below is a sample “MulticastMonitorTest.sh”. Please run this test as well, along with the MulticastTest utility described further down.
WL_HOME="$HOME/bea1032/wlserver_10.3"
JAVA_VENDOR="Sun"
JAVA_HOME="$HOME/java/jdk1.6.2_05"
. ${WL_HOME}/common/bin/commEnv.sh

MULTICAST_ADDRESS=239.252.1.6
MULTICAST_PORT=8888
IDLE_TIMEOUT_SECONDS=120
DOMAIN_NAME=Your_DomainName
CLUSTER_NAME=YourClusterName

# The syntax:
# java weblogic.cluster.MulticastMonitor <multicastaddress> <port> <domainname> <clustername>

${JAVA_HOME}/bin/java -classpath ${WEBLOGIC_CLASSPATH} weblogic.cluster.MulticastMonitor ${MULTICAST_ADDRESS} ${MULTICAST_PORT} ${DOMAIN_NAME} ${CLUSTER_NAME}
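If you prefer to run the monitor without the wrapper script, a minimal sketch (the paths, address, port, domain, and cluster values below are placeholders, not from the original post) is to source setWLSEnv.sh and invoke the class directly on each cluster node:

# Assumed paths/values for illustration only; adjust to your environment.
. $HOME/bea1032/wlserver_10.3/server/bin/setWLSEnv.sh
java weblogic.cluster.MulticastMonitor 239.252.1.6 8888 Your_DomainName YourClusterName

Run this on every machine that hosts a cluster member; each monitor should print the heartbeats it receives from all the other members.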
A few points regarding the multicast address that always need to be taken care of:
1). The multicast address must be an IP address between 224.0.0.0 and 239.255.255.255 or a hostname with an IP address in this range.
2). The default multicast address used by WebLogic Server is 239.192.0.0.
3). Do not use any x.0.0.1 multicast address where x is between 0 and 9, inclusive.
MulticastTest Utility:
The MulticastTest utility helps you debug multicast problems when you configure a WebLogic cluster. The utility sends out multicast packets and returns information about how effectively multicast is working on your network.
http://download.oracle.com/docs/cd/E13222_01/wls/docs100/admin_ref/utils.html#wp1199798
Syntax: java utils.MulticastTest -n name -a address [-p portnumber] [-t timeout] [-s send]
Argument | Meaning
-n name | Required. A name that identifies the sender of the sequenced messages. Use a different name for each test process you start.
-a address | The multicast address on which (a) the sequenced messages are broadcast and (b) the servers in the cluster communicate with each other. (The default is 237.0.0.1.)
-p portnumber | Optional. The multicast port on which all the servers in the cluster communicate. (The multicast port is the same as the listen port set for WebLogic Server, which defaults to 7001 if unset.)
-t timeout | Optional. Idle timeout, in seconds, if no multicast messages are received. If unset, the default is 600 seconds (10 minutes). If the timeout is exceeded, a positive confirmation of the timeout is sent to stdout.
-s send | Optional. Interval, in seconds, between sends. If unset, the default is 2 seconds. A positive confirmation of each message sent is written to stdout.
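As a rough usage sketch (the sender names, address, and port below are placeholders, and weblogic.jar must be on the classpath, for example via setWLSEnv.sh), you would start one sender on each machine that will host a cluster member:

# Run on machine 1 (do NOT use the address/port of a running cluster):
java utils.MulticastTest -n node1 -a 239.252.1.6 -p 8888 -t 60 -s 2
# Run on machine 2 at the same time:
java utils.MulticastTest -n node2 -a 239.252.1.6 -p 8888 -t 60 -s 2
# If multicast is healthy, each process should report packets received from
# the other sender ("node1"/"node2") in addition to its own.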
Always Remember:
Do NOT run the MulticastTest utility by specifying the same multicast address (the -a parameter) as that of a currently running WebLogic cluster. The utility is intended to verify whether multicast is functioning properly in our network, so it is basically a network-level diagnostic. Usually this should be done before starting your clustered WebLogic Servers, or when we see multicast-related issues in the server logs.
Set Cluster Debug Flags Using the weblogic.Admin Utility:
java weblogic.Admin -url t3://localhost:7001 -username weblogic -password weblogic SET -type ServerDebug -property DebugCluster true
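To confirm the value took effect, you can read the same attribute back. This is only a sketch (the URL and credentials are placeholders, and the exact weblogic.Admin options can vary by WebLogic release):

# Hypothetical verification step; adjust URL/credentials to your domain.
java weblogic.Admin -url t3://localhost:7001 -username weblogic -password weblogic GET -pretty -type ServerDebug -property DebugCluster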
Thanks
Jay SenSharma
June 7th, 2010 on 4:13 pm
Hi Jay,
As there is MulticastTest Utility for multicast type clusters, is there something on the same lines for Unicast type clusters?
June 7th, 2010 on 4:21 pm
Hi Vicky,
Unlike “utils.MulticastTest”, there is no similar utility available for a unicast test.
Actually, multicast (IP multicast) is a simple broadcast technology that enables multiple applications to subscribe to a given IP address and port number and listen for messages. The IP/port combination is set up when the cluster is defined, and the server instances use multicast for JNDI updates and cluster heartbeats. So by running
java utils.MulticastTest -n server1 -a 224.x.x.x -p 9001
we just check whether all the nodes in the cluster are able to communicate with each other or not; it is really a test at the network level.
So if you want to test it, first run your servers in multicast mode. If everything is OK at the network level and the servers are able to send heartbeat messages to each other without any issue, then later change the cluster communication mode to “Unicast”.
At most, we can enable the unicast debug flag to analyze any issue related to this:
Unicast debug flag: -Dweblogic.debug.DebugUnicastMessaging=true
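For example, one way to enable it (a rough sketch only; the start script, server name, and admin URL are placeholders, and your domain may set JAVA_OPTIONS elsewhere, such as in setDomainEnv.sh):

# Hypothetical example: enable unicast debug logging for one managed server.
export JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugUnicastMessaging=true"
./startManagedWebLogic.sh MS-1 t3://adminhost:7001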
Keep Posting 🙂
Thanks
Jay SenSharma
June 7th, 2010 on 4:46 pm
So enabling this flag, -Dweblogic.debug.DebugUnicastMessaging, can show whether the servers in the unicast cluster are communicating with each other?
June 7th, 2010 on 5:16 pm
Hi Vicky,
Yes. This flag will print debug information about the unicast communication, so you can understand whether there is any issue with unicast or not.
Keep Posting 🙂
Thanks
Jay SenSharma
June 7th, 2010 on 5:23 pm
Thanks Jay!!
This is a wonderful site for all weblogic related queries.. Thanks a lot!!
Regards,
Vivek
March 23rd, 2011 on 6:25 am
Hi Jay,
1) Say the Admin Console is opening very slowly, but the health of all servers monitored using the Admin Console is “OK”.
Can I use the Multicast utility in this situation?
2)I have AdminServer, MS-1, MS-2.
MS-1 and MS-2 are clustered and all reside on the same physical machine. Can I use the Multicast utility to check MS-1/MS-2 communication?
Or is it compulsory that MS-1 and MS-2 are on two different physical machines?
Thanks
sathya
March 23rd, 2011 on 10:32 am
Hi Sathya,
Answer-1). The Admin Server showing the health of the managed servers as OK means the managed servers are running fine and the Admin Server is able to communicate with them without any issue. But it is possible that, while the Admin-to-MS1 and Admin-to-MS2 communication is good with no network issue between them, there is still some network issue between MS1 and MS2 themselves. So if they are forming a cluster, you should still run the multicast test.
Answer-2). The multicast test basically checks whether all the cluster members are able to communicate with each other properly, which they do by sending and receiving heartbeat messages. So if the managed servers are running on the same box, there is basically no need for a multicast test, because you can run a cluster on a single box without any network connectivity; a cluster can be formed on an isolated box as well.
But if the managed servers that are part of the cluster are running on remote boxes inside our network, then we must use MulticastTest to check and validate that multicast messages are flowing properly.
Keep Posting 🙂
Thanks
Jay SenSharma
June 3rd, 2011 on 1:00 pm
Hi Team,
I have AdminSever, MS-1 & MS-2 in the same box (linux).
MS-1 & MS-2 are in a cluster. When I start MS-1 & MS-2, the servers fail to start and show the following messages:
Please help me ASAP in resolving the issue.
Thanks in advance.
Regards,
Ashok
June 3rd, 2011 on 3:30 pm
Hi Ashok,
Please let us know the exact version of WebLogic you are using, and also provide us the complete server.log error messages. You can paste your errors as shown in the below format:
http://middlewaremagic.com/weblogic/wp-content/uploads/2009/08/How_to_Post_Comments.jpg
Keep Posting 🙂
Thanks
Jay SenSharma
September 27th, 2011 on 6:49 pm
Hi Jay,
Thanks for this blog and it is really helpful.
We are using WebLogic 10 MP1 in our product and we are trying to make our application clustered for the next release. So far it has been running as a single instance, and for scalability reasons we are trying to make it clustered.
When I read the WebLogic documentation, it says that for earlier versions of WebLogic, multicast was the recommended mode for inter-node communication, but from 10g onwards unicast is a better option compared to multicast. Does this mean multicast is becoming obsolete from 10g onwards and is no longer a recommended way for inter-node communication? What are the pros & cons of unicast vs. multicast from 10g onwards?
For our initial deployment we are planning to have 2 managed nodes in a cluster, and both would run on the same physical box as 2 JVMs. For this deployment, what would be the best option for inter-node communication?
In the next deployment, managed nodes will span across multiple physical boxes. What is recommended here for inter node communication?
Could you please help me in clarifying these doubts?
Regards,
Prakash
September 27th, 2011 on 9:15 pm
Hi Prakash,
Unicast and multicast are two different modes of cluster communication. From WLS 10 onwards, unicast clustering is the default clustering mode, but there are certain specific scenarios to consider when choosing unicast clustering.
The following considerations apply when using unicast to handle cluster communications:
1). All members of a cluster must use the same message type. Mixing between multicast and unicast messaging is not allowed.
2). You must use multicast if you need to support previous versions of WebLogic Server within your cluster.
3). Individual cluster members cannot override the cluster messaging type.
4). The entire cluster must be shut down and restarted to change the messaging mode.
5). JMS topics configured for multicasting can access WebLogic clusters configured for Unicast because a JMS topic publishes messages on its own multicast address that is independent of the cluster address. However, the following considerations apply:
a). The router hardware configurations that allow unicast clusters may not allow JMS multicast subscribers to work.
b).JMS multicast subscribers need to be in a network hardware configuration that allows multicast accessibility.
Notes:
1). In unicast messaging mode, the default listening port of the server is used if no channel is configured.
2). Cluster members communicate to the group leader when they need to send a broadcast message which is usually the heartbeat message. When the cluster members detect the failure of a group leader, the next oldest member becomes the group leader.
3). The frequency of communication in unicast mode is similar to the frequency of sending messages on multicast port.
4). “Outbound Enabled” must be TRUE for a custom channel to work in Unicast messaging mode
Multicast is based on the UDP protocol, while unicast clustering uses TCP. For more information, please refer to the following:
http://download.oracle.com/docs/cd/E11035_01/wls100/cluster/features.html
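If you want to check which mode a cluster is currently using, one quick (hedged) way is to look for the cluster-messaging-mode element under the cluster entry in the domain's config.xml; the path below is only an example:

# Example only; adjust DOMAIN_HOME to your own domain. If the element is
# absent, the cluster is using the default messaging mode for that release.
grep -A 3 "<cluster>" $DOMAIN_HOME/config/config.xml | grep "cluster-messaging-mode"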
Keep Posting 🙂
Thanks
Jay SenSharma
September 28th, 2011 on 8:26 pm
Thanks Jay for your response.
I believe JMS topics are always configured as multicast. Please let me know if this is the right understanding.
And as stated in my earlier question, we have 2 managed nodes in a cluster, and each managed node is configured to have its own JMS server via two JMS configurations. Each managed node has the same topics & queues published. We set this up 4-5 months back and so far I didn't see any issue. But today I saw a strange error in the WebLogic console and I'm not sure what its root cause is. The topic JNDI gets connected in one managed node, and in the other node it throws the below error. The topic & queue connection factories and the queue & topic names are identical in both managed nodes. Could you please let me know what could be causing this and how I should go about fixing it?
Thanks again for all your help.
<The Message-Driven EJB: SupplierStatusMDB is unable to connect to the JMS destination: topic/supplier/statusTopic. The Error was:
weblogic.jms.common.InvalidClientIDException: Client id, SupplierStatusMDB, is in use. The reason for rejection is "The JNDI name weblogic.jms.connection.clientid.SupplierStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:2):SupplierStatusMDB)"
Nested exception: weblogic.jms.common.InvalidClientIDException: Client id, SupplierStatusMDB, is in use. The reason for rejection is “The JNDI name weblogic.jms.connection.clientid.SupplierStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:2):SupplierStatusMDB)”
Nested exception: weblogic.jms.common.InvalidClientIDException: Client id, SupplierStatusMDB, is in use. The reason for rejection is “The JNDI name weblogic.jms.connection.clientid.SupplierStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:2):SupplierStatusMDB)”>
<The Message-Driven EJB: ClusteringStatusMDB is unable to connect to the JMS destination: topic/clustering/statusTopic. The Error was:
weblogic.jms.common.InvalidClientIDException: Client id, ClusteringStatusMDB, is in use. The reason for rejection is "The JNDI name weblogic.jms.connection.clientid.ClusteringStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:3):ClusteringStatusMDB)"
Nested exception: weblogic.jms.common.InvalidClientIDException: Client id, ClusteringStatusMDB, is in use. The reason for rejection is “The JNDI name weblogic.jms.connection.clientid.ClusteringStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:3):ClusteringStatusMDB)”
Nested exception: weblogic.jms.common.InvalidClientIDException: Client id, ClusteringStatusMDB, is in use. The reason for rejection is “The JNDI name weblogic.jms.connection.clientid.ClusteringStatusMDB was found, and was bound to an object of type weblogic.jms.frontend.FEClientIDSingularAggregatable : FEClientIDSingularAggregatable(SingularAggregatable(:3):ClusteringStatusMDB)”>
Regards,
Prakash
September 29th, 2011 on 10:24 pm
Hi Prakash,
From your description it seems that you might be using a distributed queue/topic. When you target an MDB to a cluster, WebLogic Server deploys a copy of the MDB in each WebLogic Server instance. Each copy of the MDB deployed to an individual server is treated as a separate deployment, and this can cause a problem because WebLogic JMS thinks you have deployed multiple durable subscriptions that use the same client-id.
However, as you said, for some reason the JNDI entry of the topic/queue was not present on one of the managed servers. That makes WebLogic Server think that this managed server is down, so the MDB container attempts to reconnect to the other topic/queue, but the MDB would be using the same client id; that is the reason you are getting this issue.
Now to solve this problem you can try using the “generate-unique-jms-client-id” element in the “weblogic-ejb-jar.xml” deployment descriptor and setting it to “true”. This makes sure that WLS generates a unique subscription ID for each managed server's MDB deployment. More information on this element can be found at the link below:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/ejb/DDreference-ejb-jar.html#wp1397207
Regards,
Ravish Mody
September 30th, 2011 on 8:30 pm
Thanks Ravish for your response. I debugged this issue further and it turns out it is caused by the cluster messaging mode being set to multicast. I don't see this issue with unicast though.
This is exactly what I was looking for and I believe setting this will solve the problem. This weblogic forum is really helpful and I'm getting a lot of help from here.
I believe the exception comes only for durable topics and doesn't come for queues (I see exceptions only for topics, and if I look at the admin console, all MDBs pointing to queues are connected in both managed nodes. Only topic MDBs are connected in just one managed node; in the other one they are failing).
I'm trying to put generate-unique-jms-client-id in my MDB annotation, but the syntax is really tricky and I've tried every combination; I don't see this tag getting generated in the deployment descriptor when I compile the code.
/**
* @ejb:bean name=”ClusteringStatusMDB”
* transaction-type=”Container”
* destination-type=”javax.jms.Topic”
* subscription-durability=”Durable”
* @ejb:transaction type=”NotSupported”
* @weblogic:message-driven destination-jndi-name=”topic/clustering/statusTopic”
* connection-factory-jndi-name=”softface/TopicConnectionFactory”
* generate-unique-jms-client-id True
*/
The above syntax gives an error and is not valid.
I tried @weblogic:generate-unique-jms-client-id True as well, but it doesn't add this to the deployment descriptor.
Can you please help me with the syntax? I'm really sorry to put this question here, but the weblogic annotation syntax is really confusing for me & I'm not able to get the generate-unique-jms-client-id tag into the deployment descriptor.
Regards,
Prakash
September 30th, 2011 on 9:33 pm
Hi Prakash,
I believe the below link will help you to correctly put the “generate-unique-jms-client-id” element in the “weblogic-ejb-jar.xml” file, as it has an example for it in Point-3 under “Create MDB Class and Configure Deployment Elements”.
Link: Programming and Configuring MDBs: Main Steps
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/ejb/message_beans.html#wp1148491
Regards,
Ravish Mody
October 12th, 2011 on 5:24 pm
Thanks Ravish for your response. I looked at the link you gave (and a few other blogs as well) and tried a couple of the things mentioned, but I'm still not able to generate the unique client id in the ejb-jar XML through annotations. We are using WebLogic 10 MP1 and I'm not sure whether it is supported in this version.
I have another issue with the multicast configuration. We have configured independent JMS servers for each managed node (there are 2 managed nodes in the cluster) and each node has one JMS module configured. The two JMS modules have the same queue & topic names. Basically the same queues/topics are published in the two managed nodes.
Now when I use multicast, the cluster-wide JNDI update happens, and when I bring down one managed node, it removes all of its JNDI references, and it does so for the rest of the nodes in the cluster as well. This is a problem for us, as the other nodes which are up & running stop working because their JNDI references are removed as well.
I don't see this problem with unicast though. In unicast everything works fine, but I'm not sure the round-robin algorithm works correctly here. In multicast, if I look at the JNDI tree of Man1 and open an EJB, it is a cluster reference and has both Man1 & Man2 in toString, which means JNDI is syncing up the EJB references. In unicast, if I open the JNDI tree for Man1 and open an EJB, it has only the Man1 reference reflected in toString, which means each managed node has only its own references.
I did some load-balancing testing (we have a Swing client which does JNDI connectivity to WebLogic on login); in multicast it was almost 50-50, whereas in unicast it is 35-65. Btw, both managed nodes are running on the same physical box and I'm giving the cluster URL as IP1:7025,IP1:7026.
My question is: does unicast do appropriate round-robin load balancing? If yes, should I make a few EJBs clusterable so that they get replicated to all managed nodes and the cluster understands user sessions & does appropriate load balancing based on that?
Regards,
Prakash
October 15th, 2011 on 10:21 am
Ravish/Jay,
Could you please help with understanding the above issue?
With multicast, shutting down one managed node removes the JNDI references for queues & topics from the other nodes as well, even though they are up & running, but with unicast I don't see this issue. Multicast does a cluster-wide JNDI update. How about unicast? Does it do a cluster-wide JNDI update as well?
And in our configuration, we are not making EJB & JMS clusterable. In multicast, the JNDI tree for a managed node shows both managed node references for EJB objects and the client load balance is ~50-50. In unicast, the JNDI tree for a managed node shows only its own references for EJB objects and the load is not as equally distributed.
Btw, both managed nodes are running on the same physical box and I'm giving the cluster URL as IP1:7025,IP1:7026.
Regards,
Prakash
October 15th, 2011 on 4:22 pm
Ravish/Jay,
I saw this issue reported on another blog as well, and I've put my questions there too. It looks like there is a bug in WebLogic which is fixed in later versions. Could you please advise me on how to fix this issue?
http://angraze.wordpress.com/2011/01/29/weblogic-10-3-jndi-issue-with-versioned-app-during-cluster-restart/comment-page-1/#comment-11282
Regards,
Prakash
October 15th, 2011 on 4:57 pm
Hi Prakash,
Let me know if my understanding of your architecture is correct, because I am a little confused about your configuration:
Cluster-1 has MS-1 and MS-2
JMSServer-1 is targeted on MS-1
JMSServer-2 is targeted on MS-2
JMSModule-1 has Queue-1 and Topic-1 with ConnectionFactory-1
JMSModule-2 has Queue-2 and Topic-2 with ConnectionFactory-2
Where
JMSModule-1 is targeted on MS-1
JMSModule-2 is targeted on MS-2
Queue-1 is targeted on JMSServer-1
Queue-2 is targeted on JMSServer-2
Topic-1 is targeted on JMSServer-1
Topic-2 is targeted on JMSServer-2
ConnectionFactory-1 is targeted on MS-1
ConnectionFactory-2 is targeted on MS-2
Your application has been targeted to Cluster-1, and you want to get the advantage of load balancing and failover for your application on this architecture, am I correct? If yes, then I believe your configuration is not correct.
However, just for sharing, using unicast is recommended by WLS; for more information you can have a look at the below link.
Topic: WebLogic Server Communication In a Cluster
http://download.oracle.com/docs/cd/E11035_01/wls100/cluster/features.html#wp1007001
Regards,
Ravish Mody
October 16th, 2011 on 11:03 am
Thanks Ravish for your response. Below is my configuration.
Cluster-1 has MS-1 and MS-2
JMSServer-1 is targeted on MS-1
JMSServer-2 is targeted on MS-2
JMSModule-1 has Queue-1 and Topic-1 with ConnectionFactory-1
JMSModule-2 has Queue-1 and Topic-1 with ConnectionFactory-1
Both modules have same queue/topic names and connection factories with same name.
Where
JMSModule-1 is targeted on MS-1
JMSModule-2 is targeted on MS-2
Queues with same name are published to both JMS servers
Queue-1 is targeted on JMSServer-1
Queue-1 is targeted on JMSServer-2
Topics with same name are published to both JMS servers
Topic-1 is targeted on JMSServer-1
Topic-1 is targeted on JMSServer-2
Connection factories have got same name in both managed servers.
ConnectionFactory-1 is targeted on MS-1
ConnectionFactory-1 is targeted on MS-2
And for our current release, the requirement is only for load balancing. We didn't commit to failover as it's not a mission-critical application. Right now our application is memory-intensive and we are expecting more users/data growth. With clustering we are trying to reduce the memory footprint for each JVM instance and allow them to do actions in parallel, which is not possible in today's architecture as it would throw OOM.
The client is a Swing UI, and when users log in, we do load balancing for user sessions based on round-robin affinity. We didn't choose load balancing for JMS as it's a legacy application, and the application is designed in such a fashion that it is difficult to take full benefit of WebLogic clustering. The application has a lot of static & global synchronization which works in a single JVM but creates problems in a cluster.
I downloaded the latest WebLogic 10.3.5, and in this version unicast also updates the cluster-wide JNDI tree; here multicast & unicast work the same way, and shutting down one managed node removes the JNDI references from the other managed node as well.
But in WebLogic 10 MP1 (same application deployed), unicast doesn't update the cluster-wide JNDI (only multicast does), and with unicast I don't see any issue when I bring down one managed node. It doesn't remove the JNDI references on the other managed node.
Is there a way to stop this JNDI removal on the other managed node when you shut down one of them?
And there is an option to set the client id in the 10.3.5 admin console for durable MDBs, but in 10 MP1 it is not there.
Regards,
Prakash
October 16th, 2011 on 12:12 pm
Hi Prakash,
You could have used a UDQ and UDT (uniform distributed queue/topic) in a single JMSModule; however, as you said your JMS application is legacy and would have issues with that, I believe this option is not available to us.
However, your issue looks to be a BUG in WLS, hence I would suggest you contact the WLS support team and provide them with a simple test case and the steps to reproduce this issue. Also mention that it works in WLS 10 MP1 but not in later versions, so that they can provide a fix for it if it is a BUG.
Also, I would suggest you share with us the solution/workaround, if any, given by the WLS support team so that others who are having a similar issue can get help from it.
Regards,
Ravish Mody
October 16th, 2011 on 4:07 pm
Hi Prakash C Rao,
In addition to Ravish's reply: please make sure that you define different Queue/Topic names and ConnectionFactory names inside the different modules. In one of the issues I have seen, duplicate names were causing the problem:
JMSModule-1 has Queue-1 and Topic-1 with ConnectionFactory-1
JMSModule-2 has Queue-1 and Topic-1 with ConnectionFactory-1
Keep Posting 🙂
Thanks
Jay SenSharma
October 16th, 2011 on 7:27 pm
Thanks Ravish/Jay for your responses. Yes, I'll try UDQ and UDT as you suggested and see how it goes. I'll also try to develop a sample, and if I see the same issue I'll see if I can get Oracle WebLogic support involved.
Jay,
I tried giving different names to the queue/topic connection factories and queue/topic names, but I still see the issue. In unicast I don't see any issue, but in multicast, restarting one managed node removes the queue/topic definitions from the other managed nodes as well.
1) man1-jms-module configuration
softface/QueueConnectionFactory
Non-Persistent
10
1
600
true
softface/TopicConnectionFactory
10
1
600
false
man1-JMSServer
SoftfaceJMSTemplate
priority
queue/classification/RESULTSQ
2) man2-jms-module configuration
softface/QueueConnectionFactory
Non-Persistent
10
1
600
true
softface/TopicConnectionFactory
10
1
600
false
man2-JMSServer
SoftfaceJMSTemplate
priority
queue/classification/NOTIFYQ
If you look at these, I've kept different names for the topic/queue connection factories and topic/queue names in the two managed node JMS modules.
The JNDI names have to be the same because it's the same application deployed to both managed nodes (the same .ear deployed to the cluster itself), and it refers to the same JNDI names.
Regards,
Prakash
October 16th, 2011 on 7:30 pm
The XML output didn't come through properly in my previous comment, so I'm pasting the configuration again:
<connection-factory name="man1-SoftfaceQueueConnectionFactory">
  <jndi-name>softface/QueueConnectionFactory</jndi-name>
  <default-delivery-params>
    <default-delivery-mode>Non-Persistent</default-delivery-mode>
    <default-redelivery-delay>10</default-redelivery-delay>
  </default-delivery-params>
  <client-params>
    <messages-maximum>1</messages-maximum>
  </client-params>
  <transaction-params>
    <transaction-timeout>600</transaction-timeout>
    <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
  </transaction-params>
</connection-factory>
<connection-factory name="man1-SoftfaceTopicConnectionFactory">
  <jndi-name>softface/TopicConnectionFactory</jndi-name>
  <default-delivery-params>
    <default-redelivery-delay>10</default-redelivery-delay>
  </default-delivery-params>
  <client-params>
    <messages-maximum>1</messages-maximum>
  </client-params>
  <transaction-params>
    <transaction-timeout>600</transaction-timeout>
    <xa-connection-factory-enabled>false</xa-connection-factory-enabled>
  </transaction-params>
</connection-factory>
<queue name="man1-classificationResultsQueue">
  <sub-deployment-name>man1-JMSServer</sub-deployment-name>
  <template>SoftfaceJMSTemplate</template>
  <destination-key>priority</destination-key>
  <jndi-name>queue/classification/RESULTSQ</jndi-name>
</queue>
<connection-factory name="man2-SoftfaceQueueConnectionFactory">
  <jndi-name>softface/QueueConnectionFactory</jndi-name>
  <default-delivery-params>
    <default-delivery-mode>Non-Persistent</default-delivery-mode>
    <default-redelivery-delay>10</default-redelivery-delay>
  </default-delivery-params>
  <client-params>
    <messages-maximum>1</messages-maximum>
  </client-params>
  <transaction-params>
    <transaction-timeout>600</transaction-timeout>
    <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
  </transaction-params>
</connection-factory>
<connection-factory name="man2-SoftfaceTopicConnectionFactory">
  <jndi-name>softface/TopicConnectionFactory</jndi-name>
  <default-delivery-params>
    <default-redelivery-delay>10</default-redelivery-delay>
  </default-delivery-params>
  <client-params>
    <messages-maximum>1</messages-maximum>
  </client-params>
  <transaction-params>
    <transaction-timeout>600</transaction-timeout>
    <xa-connection-factory-enabled>false</xa-connection-factory-enabled>
  </transaction-params>
</connection-factory>
<queue name="man2-classificationNotifyQueue">
  <sub-deployment-name>man2-JMSServer</sub-deployment-name>
  <template>SoftfaceJMSTemplate</template>
  <destination-key>priority</destination-key>
  <jndi-name>queue/classification/NOTIFYQ</jndi-name>
</queue>
Regards,
Prakash
October 16th, 2011 on 11:25 pm
Hi Prakash,
I am not sure whether, in your architecture, there is a possibility to use a local JNDI name rather than a global JNDI name.
If your architecture permits that, you can try doing so; it will not replicate the JNDI name across the cluster.
October 27th, 2011 on 1:26 pm
Thanks Vishal for your inputs as well. I tried local JNDI and it works (it doesn't mess up the JNDI references for JMS objects on the other managed nodes when one node is down). In fact, in WebLogic 10 MP1 there seems to be some issue with local JNDI as well, where I'm not able to give the same local JNDI name in two managed nodes, but in WebLogic 11g it works.
I'm not sure it works for our architecture, as some of the JMS queues/topics are referenced from remote machines and local JNDI might not work for those. There are a few queues/topics which are referenced within the managed node itself, and for those it will work.
I've given up on multicast, as it removes the JNDI references for JMS objects on the other managed nodes when one goes down. This works in unicast mode in WebLogic 10 MP1 and we decided to stick with that. However, this problem exists in unicast as well in 11g, so I will need to find a solution at some point, keeping in mind that we might need to upgrade to 11g in the future.
Regards,
Prakash
January 13th, 2012 on 12:30 pm
I thought of posting some of the workarounds I did while creating a cluster URL for EJB load balancing, so that other people can take benefit from them.
I didn't find suitable links for EJB load balancing and hence am posting this information here.
As everyone is aware, there are two ways to create a common cluster URL for EJB load balancing: the first is a comma-separated list of addresses, and the second is a common DNS name which maps to multiple IP addresses.
http://docs.oracle.com/cd/E13222_01/wls/docs81/cluster/setup.html (refer to Cluster address section)
For our project I tried the DNS approach (for usability purposes, a common DNS name is much better than a comma-separated list), where each managed node had a dedicated IP/DNS address and a common cluster DNS name mapped to these. Below is what was tried out.
MAN1 -> DNS1 or IP1, port 7001
MAN2 -> DNS2 or IP2, port 7001
C_DNS -> mapped to IP1 & IP2. (C_DNS contains multiple A records)
Admin node IP is not part of C_DNS mapped IP list.
Cluster URL -> C_DNS:7001
TTL -> initially set to 1 hour, changed to 5 minutes and then to 1 second for subsequent testing.
The EJB load balancing was set to round-robin affinity.
As mentioned in Weblogic documentation, when clients obtain an initial JNDI context by supplying the cluster DNS name, weblogic.jndi.WLInitialContextFactory obtains list of all addresses that are mapped to the DNS name. This list is cached by WebLogic Server instances, and new initial context requests are fulfilled using addresses in the cached list with a round-robin algorithm. If a server instance in the cached list is unavailable, it is removed from the list. The address list is refreshed from the DNS service only if the server instance is unable to reach any address in its cache.
Using a cached list of addresses avoids certain problems with relying on DNS round-robin alone. For example, DNS round-robin continues using all addresses that have been mapped to the domain name, regardless of whether or not the addresses are reachable. By caching the address list, WebLogic Server can remove addresses that are unreachable, so that connection failures aren’t repeated with new initial context requests.
This means the Weblogic maintains a cache and does round robin instead of relying on DNS round robin. So irrespective of TTL value that is set, Weblogic would do round robin for each new Initial context that is created.
I was hoping that this would work as described in the WebLogic documentation, but it didn't give consistent behavior when I tested in different environments (dev, QA, etc.). In some environments it worked, with each initial context load-balanced in round-robin fashion and load balancing at 50-50, but in other environments it was 70:30 or 80:20. The WebLogic configuration was identical in all environments; they just had different DNS servers in different geographical locations.
I tested this up to 1000 connections and couldn't get consistent behavior in all environments. Later I dropped the idea and found a workaround which still uses the cluster DNS (for usability purposes), but internally the client code constructs a comma-separated URL by getting all IPs mapped to the cluster DNS from the DNS server. The initial context is created with the comma-separated URL and used for load balancing. I tested this in all environments and it gave me consistent results where, over a period of time, the load balancing is almost 50:50.
Below is the approach taken.
MAN1 -> DNS1 or IP1, port 7001
MAN2 -> DNS2 or IP2, port 7001
C_DNS -> mapped to IP1 & IP2. (C_DNS contains multiple A records)
Admin node IP is not part of C_DNS mapped IP list.
Cluster URL -> C_DNS:7001
From the client code, take C_DNS, get all A records mapped to this C_DNS, and construct a comma-separated URL. All A records can be obtained from InitialDirContext as indicated in the below url.
http://mindprod.com/jgloss/dns.html
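At the shell level, the same idea can be sketched roughly like this (the DNS name and port are placeholders, and this assumes the `host` utility is available; the real implementation was done in the client code via InitialDirContext as described above):

# Hypothetical sketch: resolve all A records behind the cluster DNS name and
# build a comma-separated t3 URL for the initial context.
C_DNS=cluster.example.com
IPS=$(host "$C_DNS" | awk '/has address/ {print $4 ":7001"}' | paste -sd, -)
echo "t3://$IPS"    # e.g. t3://10.0.0.1:7001,10.0.0.2:7001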
Our product only hits the DNS server a few times, so this doesn't affect the performance of the application.
I've been getting a lot of help from this blog and I really appreciate the time & effort being spent on helping us.
Regards,
Prakash
January 13th, 2012 on 1:13 pm
Hi Prakash,
This is a really great explanation shared by you. We would surely add the above comment link on our EJB page, so that others can get benefit out of it.
We would also like to share 50 Magic Points with you for sharing this information.
Keep Posting 🙂
Regards,
Ravish Mody
December 28th, 2012 on 6:52 am
Hi Team,
There are two managed servers, MS1 and MS2, in a cluster (the same cluster). How can you tell whether the load is on only one server?
March 20th, 2013 on 8:18 pm
Hi Team,
I have some issues with cluster communication / distributed topic receive.
I tried to check whether the cluster communication is happening as expected using the following utility:
java weblogic.cluster.MulticastMonitor
Out of the 2 nodes in the cluster, the monitor prints the heartbeat of only one server.
What could be the reason for this? What should I do now?
Kindly help.
Regards,
Amrita
March 25th, 2013 on 3:40 pm
Hi Rene,
thanks for this wonderful suggestion 🙂
I am using WebLogic 10.3.0. What are the risks I should consider when moving from multicast to unicast?
Will it in any way relate to the node synchronization issue?
Please suggest
Thanks,
Amrita
December 10th, 2013 on 2:58 pm
Is there any utility or Java program for testing a WebLogic 12c unicast cluster?