Wednesday, December 9, 2015

Deleted BLOB data still taking up space in Oracle

If you have database tables with CLOB/BLOB data columns, one thing you should know is that deleting these values via normal delete queries will not reclaim their space. That is, the rows will of course be removed from the tables, but the space taken up by the LOB values (number of bytes) will remain allocated. This might not be visible if you are processing small amounts of data (less than 4 MB), but once you start processing large messages it will eventually become an issue.

The reason for this is that the table row only keeps a reference (a LOB locator); the actual data lives in a separate LOB segment. In order to free up the space consumed by these LOBs, you can follow the steps below.

First, for analysis purposes, execute the following query, which lists the number of bytes taken up by LOB columns. Replace 'YOUR_VALUE' with your own values.

select  sum(bytes),dba_extents.owner , dba_extents.segment_type, dba_extents.tablespace_name
 , dba_lobs.table_name, dba_lobs.column_name
 from dba_extents , dba_lobs
 where dba_extents.tablespace_name like 'YOUR_VALUE'
 and segment_type in ('LOBSEGMENT','LOBINDEX')
 and dba_extents.owner ='YOUR_VALUE'
 and dba_lobs.table_name in ('YOUR_VALUE','YOUR_VALUE')
 and dba_lobs.owner=dba_extents.owner
 and dba_lobs.segment_name=dba_extents.segment_name
 group by segment_type, dba_extents.owner, dba_extents.segment_name, dba_extents.tablespace_name,dba_lobs.table_name ,dba_lobs.column_name
 order by dba_extents.segment_name;

The cleanup itself can be performed as below.

1. Create a temporary table and move the required data to this table. (This temporary table should be created in a separate tablespace associated with a separate data file.)
2. Once the data move is completed, truncate the original table.
3. Once the truncate is completed, insert the required data from the temporary table back into the original table.
4. Once the data transfer is completed, truncate and drop the temporary table.

If you execute the above query before step 1, after step 2, and after step 3, you should be able to see the difference in the number of bytes, which shows that the space of the deleted values has actually been released.
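The four steps could be sketched as follows; MY_TABLE, MY_TABLE_TMP and TEMP_TS are hypothetical names to replace with your own.

```sql
-- 1. Move the required data into a temporary table, created in a
--    separate tablespace that is associated with its own data file
CREATE TABLE my_table_tmp TABLESPACE temp_ts AS
    SELECT * FROM my_table;

-- 2. Truncate the original table; this deallocates its LOB segments
TRUNCATE TABLE my_table;

-- 3. Insert the required data back into the original table
INSERT INTO my_table SELECT * FROM my_table_tmp;

-- 4. Truncate and drop the temporary table
TRUNCATE TABLE my_table_tmp;
DROP TABLE my_table_tmp;
```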

Thursday, November 26, 2015

Oracle Dynamic SQL scripts

There are scenarios where you use scheduled jobs to run a script or procedure. In such situations, if you need to create/drop temp tables, scheduled jobs might cause errors when the temp table creation/deletion is done outside the procedure. Therefore, in order to put all that logic into a single procedure, you need to use dynamic SQL.

Below is a sample Oracle procedure written with dynamic SQL. In dynamic SQL, in order to execute a query you need to put your query logic within a statement string, so the first step is to declare these statement variables inside the procedure.

  stmt1  VARCHAR2(2048);
  stmt2 VARCHAR2(2048);
  stmt3 VARCHAR2(2048);
  stmt4 VARCHAR2(2048);

Next we can initialize the declared statement variables and execute each of them with EXECUTE IMMEDIATE. For example, for the last statement:

  stmt4 := 'DROP TABLE TEMP_DATA';

  EXECUTE IMMEDIATE stmt4;



What happens in this script is:

1. Create a temp table.
2. Insert data into the temp table from an existing table.
3. Add the temp table data into another table.
4. Drop the temp table.

Note that the statements need to be enclosed within single quotes (' ').

Next, if you execute this procedure you might receive an error stating that the user doesn't have sufficient privileges. This is due to the Oracle security model: in order to execute dynamic queries, you need to specify the privileges this procedure executes with. Here I'm providing the privileges of the currently logged-in user, which can be done by placing an AUTHID clause (AUTHID CURRENT_USER) in the procedure definition.

The Oracle documentation explains the other AUTHID options (such as DEFINER) that a procedure can be defined with.
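Putting the pieces together, a minimal sketch of such a procedure could look like the following. The TEMP_DATA, SOURCE_DATA and TARGET_DATA tables and their columns are hypothetical.

```sql
CREATE OR REPLACE PROCEDURE move_temp_data
AUTHID CURRENT_USER  -- run with the privileges of the currently logged-in user
IS
  stmt1 VARCHAR2(2048);
  stmt2 VARCHAR2(2048);
  stmt3 VARCHAR2(2048);
  stmt4 VARCHAR2(2048);
BEGIN
  -- 1. create a temp table
  stmt1 := 'CREATE TABLE TEMP_DATA (ID NUMBER, PAYLOAD VARCHAR2(2048))';
  -- 2. insert data into the temp table from an existing table
  stmt2 := 'INSERT INTO TEMP_DATA SELECT ID, PAYLOAD FROM SOURCE_DATA';
  -- 3. add the temp table data into another table
  stmt3 := 'INSERT INTO TARGET_DATA SELECT ID, PAYLOAD FROM TEMP_DATA';
  -- 4. drop the temp table
  stmt4 := 'DROP TABLE TEMP_DATA';

  EXECUTE IMMEDIATE stmt1;
  EXECUTE IMMEDIATE stmt2;
  EXECUTE IMMEDIATE stmt3;
  EXECUTE IMMEDIATE stmt4;
END;
/
```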

Friday, November 20, 2015

Revoke OAuth Access Token from Soap Endpoint

In WSO2 Identity Server, when you need to revoke an OAuth token, there are two options that you could follow.

1. REST endpoint
2. SOAP endpoint

The REST endpoint is documented in detail elsewhere. However, if you need to revoke the token based on the resource owner, you can go for the SOAP endpoint's revoke operation.

These are the steps to follow in order to try out this SOAP endpoint. For this example I have used IS 5.0.0 + Service Pack 1.

The operation we are going to invoke is the admin service OAuthAdminService's operation revokeAuthzForAppsByResourceOwner. Therefore, make sure that you set <HideAdminServiceWSDLs> to false in <IS_HOME>/repository/conf/carbon.xml.

1. Create a new Service Provider (RWM) through the management console and enable OAuth.
2. Note down the client key and client secret values of the new SP (under the OAuth section), and set the grant type to password.
3. Create a new user (user1) and provide login permission for the user (through an internal role).
4. Use the curl command below to request an access token for the newly created user 'user1'.

Here the format would be,

curl -v -k -X POST --user clientKey:clientSecret -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d 'grant_type=password&username=user1&password=test123' https://localhost:9443/oauth2/token

For this example I have used,
curl -v -k -X POST --user mlwL69uKnERmOwDnygn2kwAgVJca:vhW8WZ1qSHsAX6RZH7YQ7QvxVwwa -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d 'grant_type=password&username=user1&password=test123' https://localhost:9443/oauth2/token

5. Once you invoke this, you should get a json response with the access token and refresh token values.

6. You can check whether 'user1' gained authorization for the SP RWM by logging in to https://localhost:9443/dashboard/ as 'user1' and going to Authorized Apps -> View Details. If the previous call was successful, you should see the RWM app listed on that page.

7. Now, in order to revoke the token, create a SoapUI project from https://localhost:9443/services/OAuthAdminService?wsdl and send a request to the revokeAuthzForAppsByResourceOwner operation.

Make sure to enable the 'Authenticate Pre-emptively' option; otherwise you will get an illegal login attempt error and the token will not be revoked. Also, the basic-auth credentials should be the user credentials that were used to obtain the token in the first place.

8. Now, if you go to the dashboard and view authorized apps, the RWM entry should no longer be listed. You can further verify that the token was revoked by requesting another token for the same user and checking that a new value is returned.


Wednesday, November 18, 2015

Workaround for ConnectionPool closed error at API Manager Gateway

In a clustered setup, if there is a continuous high load, the following issue can occur at the gateway node.

An error occurred while submitting resources for indexing {org.wso2.carbon.registry.indexing.ResourceSubmitter}
org.wso2.carbon.registry.core.exceptions.RegistryException: Failed to commit transaction.
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.commitTransaction(
    at org.wso2.carbon.registry.core.jdbc.dao.JDBCLogsDAO.commitTransaction(
    at org.wso2.carbon.registry.core.jdbc.dao.JDBCLogsDAO.getLogList(
    at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getLogs(
    at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.getLogs(
    at org.wso2.carbon.registry.core.session.UserRegistry.getLogsInternal(
    at org.wso2.carbon.registry.core.session.UserRegistry.access$3600(
    at org.wso2.carbon.registry.core.session.UserRegistry$
    at org.wso2.carbon.registry.core.session.UserRegistry$
    at Method)
    at org.wso2.carbon.registry.core.session.UserRegistry.getLogs(
    at org.wso2.carbon.registry.indexing.ResourceSubmitter.submitResource(
    at java.util.concurrent.Executors$
    at java.util.concurrent.FutureTask.runAndReset(
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
    at java.util.concurrent.ScheduledThreadPoolExecutor$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
Caused by: java.sql.SQLException: PooledConnection has already been closed.

Here we do not need registry indexing to happen at the gateway node, so we could disable indexing at the gateway node to overcome this issue. However, in APIM 1.8.0 these index-disabling configs are not available, so what you can do instead is delay indexing at the gateway node. For this, go to the repository/conf/registry.xml file of the gateway node and, under <indexingConfiguration>, increase the indexing frequency interval value to a few hours.


Tuesday, November 10, 2015

BPEL processes and fault handling

When it comes to BPEL processes, you can handle the faults that are thrown by using fault handlers. Suppose the process you invoke throws custom fault responses; then in your BPEL process you can catch them and perform another sequence.

This is the definition of a fault handler.

        <bpel:faultHandlers>
            <bpel:catch faultName="ns2:fault1" faultVariable="fault1">
                <!-- your custom sequence -->
            </bpel:catch>
            <bpel:catch faultName="ns2:fault2" faultVariable="fault2">
                <!-- your custom sequence -->
            </bpel:catch>
            <bpel:catchAll>
                <!-- your custom sequence -->
            </bpel:catchAll>
        </bpel:faultHandlers>

As above, you can define any number of different catch sequences, and 'catchAll' can be used to catch any fault that didn't match the other clauses. The use of the faultVariable is that you can access the fault's parameters through the variable defined. For example, if the fault response carries a parameter named 'errorCode', we can access it through that fault variable.



Sunday, November 8, 2015

Adding dispatch logic to SOAPUI mock services

There might be instances where you have a few variations of a response that need to be sent based on some logic. You can do this by adding a dispatch script that contains your logic, like below.

1. Go to the required operation of your mock service and change the DISPATCH option to 'SCRIPT'.

In this example I will be dispatching a specific response based on an element of the incoming request.

Following is the sample script.

// create an XmlHolder for the request content
def holder = new com.eviware.soapui.support.XmlHolder( mockRequest.requestContent )
def arg1 = holder["//category"]

// if the above element is empty
if( !arg1 )
    return "Invalid Input Response"

if( arg1 == "Books" )
    return "SendBooksResponse"
else if( arg1 == "Articles" )
    return "SendArticlesResponse"

So what we return here is the name of the relevant mock response.

Tuesday, October 27, 2015

View WSO2 BPS-ODE related db tables of H2

In order to view the built-in H2 database (the ODE tables) of WSO2 BPS, you need to follow these steps.

1. Unzip the BPS pack and open <Product_Home>/repository/conf/carbon.xml.

2. Uncomment the H2 database configuration, so it looks like below.

        <H2DatabaseConfiguration>
            <property name="web"/>
            <property name="webPort">8082</property>
            <property name="webAllowOthers"/>
        </H2DatabaseConfiguration>

3. Start bps server and go to http://localhost:8082

4. Log in with following details .

JDBC URL: jdbc:h2:<file path to CARBON_HOME>/repository/database/jpadb
username: wso2carbon
password: wso2carbon

Friday, October 9, 2015

Performing operations with BPMN 2.0-Service Task

BPMN 2.0 is a standard/language for executing workflows. Just like WS-BPEL, BPMN can be used to create a process model that contains a series of activities to be executed. The WSO2 BPS 3.5.0 release introduces BPMN support, using Activiti as the engine. There are good introductions available on how and when BPMN can be used.

In this blog post, we will look at how to use the services provided by Activiti within a service task of our own. A service task is a type of BPMN task that is backed by a simple Java class. In a practical scenario, a service task is used to perform actions such as reassigning a user to a task, creating new tasks, delegating tasks, etc. These operations can be done quite easily with the services provided by Activiti. [1] [2]

Let us take an example. We have a certain task (approve vacation request) that is assigned to the admin user by default, and if he does not complete the task within a given time duration (2 days), we are going to assign it to another user. In order to get this working with BPMN, you need to address the points below.

1. Include a timeout on the "Approve Vacation Request" task: Timer Boundary Event
2. On timeout, call a service task to perform the delegation: Service Task

Therefore, you would need to design your process model with something similar to the below.

Now that you have the basic model designed, let us look into the next level of detail. In this scenario, the assignee in the main config of your "Approve Vacation Request" task would be set to 'admin'. You can also set the duration of your timer boundary event. In your service task you need to address these points.

1. Get the task with the task name "Approve Vacation Request" that is assigned to the user 'admin'.
2. Delegate the queried task to a new user. (You can hard-code the new user, or pass a form variable and read it with execution.getVariable().)

Following is a sample service task written to execute the above points. Here we are using methods provided by the TaskService of Activiti [1], so you would need to obtain an instance of the TaskService first.

  • Next, by using this taskService, we create a query that searches for a task with the relevant task name that is assigned (or is a task candidate) to a user called admin.
  • Then we get the queried task list, and for each matching task we set the new assignee by calling the setAssignee() method.
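A minimal sketch of such a service task, assuming the Activiti 5.x API is on the classpath (the class name and the hard-coded 'newUser' assignee are hypothetical; in practice the new assignee could come from a form variable):

```java
import java.util.List;

import org.activiti.engine.TaskService;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.activiti.engine.task.Task;

public class ReassignTaskDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // obtain the TaskService from the engine running this execution
        TaskService taskService = execution.getEngineServices().getTaskService();

        // query for "Approve Vacation Request" tasks currently assigned to admin
        List<Task> tasks = taskService.createTaskQuery()
                .taskName("Approve Vacation Request")
                .taskAssignee("admin")
                .list();

        // delegate each matching task to the new user
        for (Task task : tasks) {
            taskService.setAssignee(task.getId(), "newUser");
        }
    }
}
```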

As you can see, it is simply a matter of getting to know the available services and methods of Activiti, and you can then create your own service tasks that automate many processes easily. If you need to deploy this sample, other than deploying the .bar file in the BPS management console, you need to create a jar of the service task and add it to the <product_home>/repository/components/lib folder.


Tuesday, September 29, 2015

Cluster Guide changes for wso2 BPS 3.5.0

Please follow the general BPS cluster guide as the basis, but make sure you apply the changes below for your cluster setup if you are trying to cluster BPS 3.5.0 (yet to be released at the time of writing).

1. Regarding the persistence database (BPS_DB in the above-mentioned guide), you will not find the file mentioned there in this new release. Therefore you need to add the configuration of your BPS_DB to the bps-datasources.xml file in <product_home>/repository/conf/datasources.

Suppose you create a MySQL database called BPS_DB; you will then need to add a configuration similar to the one below to the bps-datasources.xml file. Change the db name, driverClassName, username and password according to your custom settings.

            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://localhost:3306/BPS_DB</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <validationQuery>SELECT 1</validationQuery>
                </configuration>
            </definition>

Note that this needs to be done in both master/slave nodes.

Tuesday, September 22, 2015

Creating jar files with external dependencies using maven-antrun plugin

In most projects, you will want to add your own logic to the build. You might want to copy certain files to different locations, compile source files, etc. One easy way to do this is using the maven-antrun plugin. By defining it in your pom file, you can execute the same target commands you would normally perform when creating a build.xml file. Let us take a look at an example. Here I need to perform two tasks on a given sample at build time:

1. Compile the source (.java) files provided.  (javac)
2. Create a jar from compiled files.    (jar)

First you need to add the plugin [2] to your pom.xml, like below.

                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-antrun-plugin</artifactId>
                    <version>1.8</version>
                </plugin>
If you were to create a simple build.xml file [1], you would define different targets and perform operations under those targets. In the same way, you can define tasks under this plugin and add any operation you would normally put under a target, under a task instead. In the sample below I'm adding my first task, which compiles the sources and creates a jar.
                                <dependencyfilesets prefix="jarDepends." />
                                <property name="build.compiler" value="extJavac" />
                                <property name="classdir" value="target/class" />
                                <property name="jardir" value="target/jar" />
                                <property name="project" value="BookOrder" />
                                <property name="main-class" value="newProject.BookOrder" />
                                <mkdir dir="${classdir}" />
                                <path id="build.classpath">
                                    <fileset refid="jarDepends.maven.project.dependencies" />
                                </path>
                                <javac srcdir="${basedir}" destdir="${classdir}" classpathref="build.classpath" />
                                <mkdir dir="${jardir}" />
                                <jar destfile="${jardir}/${project}.jar" basedir="${classdir}">
                                    <manifest>
                                        <attribute name="Main-Class" value="${main-class}" />
                                    </manifest>
                                </jar>

What you need to remember here is that you are running these executions under this plugin, which is a separate space. So in the above sample, I'm trying to compile a source file that has external dependencies. Though these dependencies are listed in the pom, you need to load them into this plugin for the compilation to succeed.

The easy way to do this is by defining a fileset refid with 'maven.project.dependencies' [3] (a fileset the antrun plugin makes available through <dependencyfilesets>), which loads all the dependencies from the pom into the plugin. You could also load individual dependencies, like below.

<fileset refid="mydeps.junit:junit:jar"/>

So once you have them loaded, you can create a build path and refer the loaded dependencies to it; that is what the <path id="build.classpath"> element with its nested fileset does. Then in the javac command you reference it via classpathref, so these dependencies are picked up when compiling the given sources.

With the next command, jar, we create the jar file from the .class files produced by the javac command.

Sometimes when using the javac command with the maven-antrun plugin, it tends to give an error saying 'Maybe JRE is defined instead of JDK' and fail the process. This seems to be a known issue, and the fix is to add the property below.
<property name="build.compiler" value="extJavac" />


Tuesday, September 1, 2015

Customize login page of WSO2 Identity Server based on different tenants

There are posts explaining how to customize the login page of Identity Server based on different service providers. In this post I will explain how you can customize the SAML2 SSO login page based on different tenants.

As explained there, you could go ahead with either of two methods; here I will be talking about the method that uses a JSP to direct to the relevant custom login page. The default login page is located at <IS_HOME>/repository/deployment/server/webapps/authenticationendpoint/login.jsp.

1. Rename login.jsp to default_login.jsp.
2. Create a new file named login.jsp and add the content below.
Note: Make sure you don't add any additional spaces, as this can lead to errors later. You can open the file in an IDE and check for any errors.

String tenant = request.getParameter("tenantDomain");

if (tenant != null && tenant.equals("")) {
 RequestDispatcher dispatcher = request.getRequestDispatcher("abc_login.jsp");
 dispatcher.forward(request, response);
} else {
 RequestDispatcher dispatcher = request.getRequestDispatcher("default_login.jsp");
 dispatcher.forward(request, response);
}
The above is only a sample, so you can add different if cases based on your tenants. What you need to do now is make sure your SP sends a request parameter named 'tenantDomain', so this code snippet can pick that value up.

Useful mac terminal commands!

This post contains pretty useful commands in a mac terminal.

mdfind : similar to grep, but by adding -onlyin <directory> you can restrict the search to a given directory.
screencapture : lets you take a screenshot from the command line.
pbcopy/pbpaste : copy/paste content between the terminal and the clipboard.

Wednesday, August 26, 2015

Using Validate Mediator with source defined

In WSO2 ESB, you can use the validate mediator to define your custom XSD and validate the input you are getting. If the input doesn't comply with the given XSD, you can execute a custom flow and send back a fault message. In most available examples the source attribute is not defined, so the first child of the SOAP body is validated. Below is a sample proxy that shows how you can add an XPath expression to the source attribute.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="ValidateProxy" transports="https http" startOnLoad="true">
   <!-- the proxy name and the nfs namespace URI below are placeholders; use your own -->
   <target>
      <inSequence>
         <log level="full">
            <property name="Message" value="Inside Insequance"/>
         </log>
         <validate xmlns:nfs="http://example.org/nfs" source="//nfs:CustomerReq">
            <schema key="conf:/validateCustomer.xsd"/>
            <on-fail>
               <log level="custom">
                  <property name="MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
               </log>
               <makefault version="soap11">
                  <code xmlns:tns="http://schemas.xmlsoap.org/soap/envelope/" value="tns:Receiver"/>
                  <reason value="Invalid request received"/>
               </makefault>
               <property name="RESPONSE" value="true"/>
               <header name="To" expression="get-property('ReplyTo')"/>
               <send/>
            </on-fail>
         </validate>
      </inSequence>
   </target>
</proxy>
A sample request for above would be like,
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:nfs="http://example.org/nfs">
   <soapenv:Body>
      <nfs:CustomerReq>
         ...
      </nfs:CustomerReq>
   </soapenv:Body>
</soapenv:Envelope>

Finally, add your XSD file to the location mentioned in the schema key (under Browse -> Resources -> Config -> Add Resource).

Friday, August 21, 2015

Common issues when creating API Manager cluster

There are some common issues users face when creating WSO2 API Manager clusters. Here are some points you could follow when troubleshooting.

1. Make sure all relevant nodes are added to each other's axis2.xml cluster members. For example, if you have 4 gateway nodes (2 managers and 2 workers), you need to make sure that you add all other members under the cluster members section of axis2.xml, so that all the nodes stay in sync. If this is not done properly, you might come across issues where some of your nodes are not updated with all changes (missing APIs, missing edits done to APIs).

You can check whether all nodes are syncing properly by looking at the repository/deployment/server/synapse-configs/default/api directory of each node and seeing whether all published APIs are listed.

2. When you front the gateway nodes with a load balancer, the api-manager.xml of your publisher/store nodes should point to the load balancer URL.

3. When you are installing an external key manager (WSO2 Identity Server) , make sure that you set the url of IS, in <APIKeyManager> section of api-manager.xml.

Installing features in WSO2 products

When you are installing features in WSO2 products, make sure you check the WSO2 compatibility matrix first, so that you can identify the supported platform version and the related product version.

Monday, July 27, 2015

Array creation in BPEL

Most of the time, the inputs/requests we get in BPEL processes are not simple one-line values. They can be complex XSD types where nested data schemas are used. In a situation like that, you need to handle the input payload, manipulate the content, and present the values in the form of arrays. So let us consider the example below.

In this example, I'm getting an input from the client/user (which will be a set of multiple string values). I need to take this list of values, loop through them (you can add your additional business logic/conditions here), and finally create an array from that value set.

In the first assign step I will be initializing a counter variable in order to loop over the values; inside the while loop I'm adding each value to the array (Assign1) and incrementing the counter variable (Assign2). Let us see how we can add the logic to these steps.

1. These are my request and response elements (used in the receiveInput and replyOutput steps respectively).

            <element name="ArrayCheckRequest">
                <complexType>
                    <sequence>
                        <element name="input" type="string" maxOccurs="unbounded"/>
                    </sequence>
                </complexType>
            </element>

            <element name="ArrayCheckResponse">
                <complexType>
                    <sequence>
                        <element name="result" type="string" maxOccurs="unbounded"/>
                    </sequence>
                </complexType>
            </element>

What you need to remember here is that since I'm expecting multiple values (a set of string values in this case), I need to make sure the input element is allowed to occur multiple times. This is done by setting the maxOccurs attribute to unbounded. I have done the same for my final response, since I will be sending out an array.

2. Assign: In the first copy element of this assign step, I'm initializing a counter variable so that I can loop through the values later. In the second copy element, I'm initializing the result variable of the final response.

<bpel:assign validate="no" name="Assign">
    <bpel:copy>
        <bpel:from><bpel:literal>1</bpel:literal></bpel:from>
        <bpel:to variable="counter"></bpel:to>
    </bpel:copy>
    <bpel:copy>
        <bpel:from>
            <bpel:literal>
                <tns:ArrayCheckResponse xmlns:tns="" />
            </bpel:literal>
        </bpel:from>
        <bpel:to part="payload" variable="output"></bpel:to>
    </bpel:copy>
</bpel:assign>
Things to note: I wasted a lot of time when I first created this sample, because I missed two points.

* In XPath, array indexing starts from 1 instead of 0, unlike our usual understanding from Java. So when initializing a counter variable, make sure it starts from 1.
* At first, I was initializing the result variable within the while loop. Because of this, my final response always included only the last value, as the variable was reinitialized at every loop iteration. So make sure this is done outside of, and before, the while loop.

3. While: Following is the condition of the while loop. Again, note that since indexing starts from 1, you need to include the = sign as well. I'm using the XPath function count() to get the total number of occurrences in the input list.

$counter <= count($input.payload/tns:input)

4. Assign1: Here we are using an ODE XPath extension function, which requires XPath 2.0. So make sure you add the expression language definition as it is done below; otherwise this ODE function will not be recognized.

                    <bpel:copy>
                        <bpel:from expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath2.0">
                            <![CDATA[ode:insert-as-last-into($output.payload/tns:result, $input.payload/tns:input[round($counter)])]]>
                        </bpel:from>
                        <bpel:to part="payload" variable="output">
                            <bpel:query queryLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0"><![CDATA[tns:result]]></bpel:query>
                        </bpel:to>
                    </bpel:copy>

Also make sure you add the namespace for ode at the top of bpel process definition.


5. Assign2: In this step you increment the counter variable, like below.

<bpel:assign validate="no" name="Assign2">
    <bpel:copy>
        <bpel:from expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0">
            <![CDATA[$counter + 1]]>
        </bpel:from>
        <bpel:to variable="counter"></bpel:to>
    </bpel:copy>
</bpel:assign>

Friday, July 24, 2015

Adding a new certificate to WSO2 IS Service Provider creation

When adding a new Service Provider (SP) in WSO2 Identity Server, there is only a drop-down box with existing certificate alias names for adding your security certificate. So in order to insert a custom certificate into this list, you can go through the following steps.

1. Create a .pem file that has the content of your certificate. It should be in the following format.

-----BEGIN CERTIFICATE-----
...content of your certificate...
-----END CERTIFICATE-----

2. Go to the <IS_HOME>/repository/resources/security folder and take a backup of wso2carbon.jks.
3. From your terminal, go to the above folder location and run the command below, which will import your certificate into the keystore. In this example, I'm adding the sample.pem file with the alias sampleCertificate.

 keytool -import -alias sampleCertificate -file sample.pem -keystore wso2carbon.jks -storepass wso2carbon

4. Once this command is executed, it will ask whether to trust this certificate; enter yes, and you should get a message like below.

'Certificate was added to keystore'

5. Now, once you restart the IS server and go to the Add New Service Provider page, your new certificate will be listed under its alias name.

Note: If your .pem file is corrupted or invalid, you will get errors at step 3, so make sure your certificate is valid.

Wednesday, July 15, 2015

WSO2 BPS Cluster Guide

There is a very detailed and easy-to-understand guide available on BPS clustering, which also includes information on best practices for BPS cluster deployment.

Tuesday, July 7, 2015

Configuring timeouts with WSO2 ESB

When dealing with proxies/sequences built in WSO2 ESB, a common case is that the back-end service does not respond for a long time and a timeout occurs. You may then need to show a reasonable error message to the customer on such timeouts. What you can do is configure a timeout and add a fault sequence to be executed on error.

Let us look at following sample.

            <endpoint>
                <address uri="http://localhost:9000/services/SimpleStockQuoteService">
                    <timeout>
                        <duration>10000</duration>
                        <responseAction>fault</responseAction>
                    </timeout>
                </address>
            </endpoint>
This is an endpoint declaration in a sample proxy, and I'm setting an endpoint timeout of 10 seconds for it. By setting the responseAction property to fault, the defined fault sequence is invoked on timeout. The suspendOnFailure tag directly sends the endpoint into the suspended state when one of its configured error codes occurs, while markForSuspension can be configured with a number of retries, so that requests are retried before the endpoint is sent to the suspended state. You can gain more insight on this by going through [1].

The point I want to bring out here is what to do if you need the timeout to be less than 15 seconds. In practice such a small timeout is not recommended, because a normal endpoint callback can take 30 seconds or more. But in a situation where you need the endpoint to time out within a small time limit, you need to remember to configure the following values.

There is a TimeoutHandler that runs at an interval of 15 seconds, so the time at which callbacks get cleared can deviate by up to 15 seconds from the value you configure as the endpoint timeout in your proxy configuration. Say you declare your endpoint timeout as 10 seconds: it will only be taken into account after this initial 15-second interval. So if you need the endpoint to time out within 10 seconds, you have to declare the following properties as well.

(These files can be found in repository/conf)

http.socket.timeout=30000 (passthru-http.properties or nhttp.properties, depending on your transport type)
synapse.global_timeout_interval=20000 (synapse.properties)
synapse.timeout_handler_interval=5000 (synapse.properties)
<duration>10000</duration> (in your proxy configuration)

The values should be configured such that duration <= http.socket.timeout. You can go through [2] to gain more info on this.




Friday, June 12, 2015

Things to remember when encoding query strings

Encoding parameters in a URL is a very common task, but there is one important tip to remember when you are encoding query strings that will later be read back with request.getParameter(). Let us look at an example first.

Say you need to add a few query strings to a URL and encode them. If you encode like below,

String valueOfa = "this is a";
String valueOfb = "this is b";
String parameters = URLEncoder.encode("a=" + valueOfa + "&b=" + valueOfb, "UTF-8");

you will finally get your query string looking like this:

a%3Dthis+is+a%26b%3Dthis+is+b
So what happens here is that URLEncoder encodes blindly. It encodes your spaces, but also the '=' and '&' characters, which you do not actually want encoded. At this point you can no longer retrieve your query values, since there is no way to distinguish the parameters (a, b) from their values.
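To make the problem concrete, here is a small runnable sketch (the parameter names and values are just the ones from this example) showing what blind encoding produces:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class BlindEncoding {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String valueOfa = "this is a";
        String valueOfb = "this is b";

        // Encoding the whole query string at once encodes '=' and '&' too
        String parameters = URLEncoder.encode("a=" + valueOfa + "&b=" + valueOfb, "UTF-8");
        System.out.println(parameters);
        // prints: a%3Dthis+is+a%26b%3Dthis+is+b
    }
}
```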

The solution to this is to encode each name and value separately and construct the query string one section at a time. See below.

String parameters = URLEncoder.encode("a", "UTF-8");
parameters += "=";
parameters += URLEncoder.encode(valueOfa, "UTF-8");
parameters += "&";
parameters += URLEncoder.encode("b", "UTF-8");
parameters += "=";
parameters += URLEncoder.encode(valueOfb, "UTF-8");

This way, your query strings will appear in the URL like below:

a=this+is+a&b=this+is+b
You could check this link for more important tips on encoding and decoding.
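As a runnable sketch of the piecewise approach above (using the same example values):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class PiecewiseEncoding {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String valueOfa = "this is a";
        String valueOfb = "this is b";

        // encode names and values separately; keep '=' and '&' literal
        String parameters = URLEncoder.encode("a", "UTF-8")
                + "=" + URLEncoder.encode(valueOfa, "UTF-8")
                + "&" + URLEncoder.encode("b", "UTF-8")
                + "=" + URLEncoder.encode(valueOfb, "UTF-8");

        System.out.println(parameters);
        // prints: a=this+is+a&b=this+is+b
    }
}
```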