How to generate a large number of access tokens for WSO2 API Manager

We can generate multiple access tokens and persist them to the token table using the following script. It generates random users and tokens, inserts them into the access token table, and at the same time writes the tokens to a text file so JMeter can load them from that file. Having many distinct tokens and users increases the number of throttle contexts created in the system, which lets us generate a traffic pattern that is very close to real production traffic. The generated access_token3.sql file can then be executed against the API Manager database (for example with the mysql command line client), and keys3.txt can be referenced from a JMeter CSV Data Set Config element.

#!/bin/bash
# Generate 100,000 random user/token pairs: INSERT statements go to a SQL
# file, and the raw tokens to a text file for JMeter.
for (( c=1; c<=100000; c++ ))
do
  # 32-character random access token and 6-character random username
  ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
  AUTHZ_USER=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 6 | head -n 1)
  echo "INSERT INTO apimgt.IDN_OAUTH2_ACCESS_TOKEN (ACCESS_TOKEN,REFRESH_TOKEN,ACCESS_KEY,AUTHZ_USER,USER_TYPE,TIME_CREATED,VALIDITY_PERIOD,TOKEN_SCOPE,TOKEN_STATE,TOKEN_STATE_ID) VALUES ('$ACCESS_KEY','4af2f02e6de335dfa36d98192ec2df1','C2aNkK1HRJfWHuF2jo64oWA1xiAa','$AUTHZ_USER@carbon.super','APPLICATION_USER','2015-04-01 09:32:46',99999999000,'default','ACTIVE','NONE');" >> access_token3.sql
  echo "$ACCESS_KEY" >> keys3.txt
done

How to avoid sending allowed domain details to the client on authentication failures caused by domain restriction violations in WSO2 API Manager

When authentication fails due to a domain restriction violation, the default error response can reveal the allowed domain details, and attackers can sometimes use that information to guess a correct domain and resend the request with it. Since different WSO2 users expect different error formats, we let users configure the error messages. Because this is an authentication failure, you can customize the auth_failure_handler.xml file available in the /repository/deployment/server/synapse-configs/default/sequences directory of the server, where you can define any error message, status code, etc. Below I will provide a sample sequence that sends a 401 status code and a simple error message to the client. If needed, you can customize this with the Synapse configuration language and send any specific response or status code.

To do so, add the following Synapse configuration to the auth_failure_handler.xml file in that directory.

<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:code>$1</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>$2</am:description>
            </am:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
            <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
        </args>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="401" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <send/>
    <drop/>
</sequence>


The sequence is deployed automatically once the file is saved. For domain restriction errors, the client will then see a response like the following.
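For example, you can invoke the API in verbose mode with curl (the endpoint and token below are placeholders for your actual gateway URL and access token):

curl -v -H "Authorization: Bearer <access-token>" http://localhost:8280/pizzashack/1.0.0/menu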
< HTTP/1.1 401 Unauthorized
< Access-Control-Allow-Origin: *
< domain: test.com
< Content-Type: application/xml; charset=UTF-8
< Date: Fri, 16 Dec 2016 08:31:37 GMT
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
< 
<am:fault xmlns:am="http://wso2.org/apimanager">
  <am:code>0</am:code>
  <am:type>Status report</am:type>
  <am:message>Runtime Error</am:message>
  <am:description>Unclassified Authentication Failure</am:description>
</am:fault>


In the backend server logs the correct error message is printed as follows, so system administrators can see what the actual issue is.


[2016-12-16 14:01:37,374] ERROR - APIUtil Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
[2016-12-16 14:01:37,375] ERROR - AbstractKeyValidationHandler Error while validating client domain
org.wso2.carbon.apimgt.api.APIManagementException: Unauthorized client domain :null. Only "[test.com]" domains are authorized to access the API.
    at org.wso2.carbon.apimgt.impl.utils.APIUtil.checkClientDomainAuthorized(APIUtil.java:3843)
    at org.wso2.carbon.apimgt.keymgt.handlers.AbstractKeyValidationHandler.checkClientDomainAuthorized(AbstractKeyValidationHandler.java:92)

Swagger code generator support for WSO2 Microservices Framework for Java (MSF4J)


WSO2 Microservices Framework for Java (MSF4J) is a lightweight, high-performance framework for developing and running microservices, and one of the highest-performing lightweight Java microservices frameworks available. The Swagger code generator can now generate an MSF4J microservice skeleton from a Swagger definition, so you can use this project to convert your Swagger definitions into microservices quickly, producing a complete service skeleton within seconds.

The MSF4J generator uses java-msf4j as the default library.

java -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar generate \
  -i http://petstore.swagger.io/v2/swagger.json \
  -l msf4j \
  -o samples/server/petstore/msf4j

Before you build/run the service, replace .deploy(new PetApi()) in the Application.java file with your actual service class name, for example .deploy(new ApisAPI()); then it will start that service. If you have multiple service classes, add them in comma-separated form, as shown in the second snippet below.

    new MicroservicesRunner()
            .deploy(new PetApi())
            .start();
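For example, with several service classes (the class names here are placeholders for your generated services), you can pass them to the same deploy(...) call:

    // Deploy several microservice classes in one runner
    new MicroservicesRunner()
            .deploy(new PetApi(), new OrderApi())
            .start();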

To use it: in the generated folder, run mvn package to build the jar, then start your server with java -jar target/micro-service-server-1.0.0.jar (the Java microservice listens on port 8080 by default).

Run the following command or simply go to http://127.0.0.1:8080/pet/12 from your browser:

curl http://127.0.0.1:8080/pet/12

Simple auto-scaling logic for software scaling

In this post I will list sample code (not exact code, more like pseudo code) to explain how auto-scaling components work. We can use this logic in scalable load balancers to make scaling decisions based on the number of in-flight requests. Note the ceiling in the first line: a fraction of one instance's worth of load still requires a whole extra instance.

required_instances = ceil(request_in_fly / number_of_max_requests_per_instance);

if (required_instances > current_active_instances)
{
    // Scale-up decision
    if (required_instances <= max_allowed)
    {
        spawn_instances(required_instances - current_active_instances);
        wait_sometime_to_activate_instances();
    }
    else
    {
        // Cannot handle the load within the allowed instance cap
    }
}
else if (required_instances < current_active_instances)
{
    // Scale-down decision
    if (required_instances >= min_allowed)
    {
        terminate_instances(current_active_instances - required_instances);
        wait_some_time_to_effect_termination();
    }
}
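For example, if request_in_fly is 950 and number_of_max_requests_per_instance is 100, required_instances is ceil(9.5) = 10; with 8 instances currently active and max_allowed of 20, the balancer would spawn 2 additional instances.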

WSO2 API Manager 2.0.0 New Throttling logic Execution Order

In this post I would like to discuss how throttling happens within the throttle handler with the complex throttling newly added to API Manager. This order is very important, and we used it to optimize runtime execution. Here is the order in which the different kinds of policies are executed.

01. Blocking conditions
Blocking conditions are evaluated first because they are the least expensive check. All blocking conditions are evaluated on a per-node basis: they are simple condition checks, so we do not need to maintain counters across all gateway nodes.

02. Advanced Throttling
If the request is not blocked, we move on to API-level throttling. Here we throttle at both the API level and the resource level. The API-level throttle key is always the API name, which means we can control the number of requests per API.

03. Subscription Throttling with burst controlling
The next step is subscription-level API throttling. When an API is in the store, subscribers come and subscribe to it; whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application id (which identifies the application uniquely) and the API context + version (which identify the API uniquely) to create the key for subscription-level throttling. That means subscription-level throttling always counts the requests made to an API through a given application subscription.

04. Application Throttling
Application-level throttling lets users control the total number of requests coming to all APIs subscribed to through a given application. In this case counters are maintained against the application-user combination.

05. Custom Throttling Policies
Users are allowed to define dynamic rules according to specific use cases. This feature is applied globally across all tenants: system administrators define these rules, and they apply to all users in the system. When you create a custom throttling policy you can define any policy you like by writing a Siddhi query to address the use case. The specific combination of attributes being checked in the policy has to be defined as the key (called the key template). Usually the key template includes a predefined format and a set of predefined parameters.

Please see the diagram below (drawn by Sam Baheerathan) to understand this flow clearly.

[Diagram: execution order of the throttling policy types]


How the newly added Traffic Manager fits into a WSO2 API Manager distributed deployment

In this post I would like to share deployment diagrams for an API Manager distributed deployment and show how it changes after adding the Traffic Manager. If you are interested in complex Traffic Manager deployment patterns, you can go through my previous blog posts; here I will list only the deployment diagrams.

Please see the distributed API Manager deployment diagram below.

[Diagram: API Manager distributed deployment]


Now here is how it looks after adding traffic manager instances to it.

[Diagram: distributed deployment with Traffic Manager instances]


And here is how the distributed deployment looks after adding high availability for the Traffic Manager instances.

[Diagram: distributed deployment with high-availability Traffic Managers]

WSO2 API Manager - How do Custom Throttling Policies work?

Users are allowed to define dynamic rules according to specific use cases. This feature is applied globally across all tenants: system administrators define these rules, and they apply to all users in the system. When you create a custom throttling policy you can define any policy you like by writing a Siddhi query to address the use case. The specific combination of attributes being checked in the policy has to be defined as the key (called the key template). Usually the key template includes a predefined format and a set of predefined parameters.
With the new throttling implementation, which uses WSO2 Complex Event Processor as the global throttling engine, users can create their own custom throttling policies by writing custom Siddhi queries. A key template can contain a combination of allowed keys separated by a colon ":", and each key should start with the "$" prefix. In WSO2 API Manager 2.0.0, users can use the following keys to create custom throttling policies:
  • apiContext
  • apiVersion
  • resourceKey
  • userId
  • appId
  • apiTenant
  • appTenant

Sample custom policy

FROM RequestStream
SELECT userId, (userId == 'admin@carbon.super' and apiKey == '/pizzashack/1.0.0:1.0.0') AS isEligible,
       str:concat('admin@carbon.super', ':', '/pizzashack/1.0.0:1.0.0') AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible == true]#window.time(1 min)
SELECT throttleKey, (count(throttleKey) >= 5) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;
As shown in the above Siddhi query, the throttle key must match the key template format; if there is a mismatch between the key template and the throttle key, requests will not be throttled. For the query above, the matching key template would presumably be $userId:$apiContext:$apiVersion (assuming the API context is /pizzashack/1.0.0 and the version is 1.0.0).

WSO2 API Manager - How does Subscription Throttling with burst controlling work?


The next step is subscription-level API throttling. When an API is in the store, subscribers come and subscribe to it; whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application id (which identifies the application uniquely) and the API context + version (which identify the API uniquely) to create the key for subscription-level throttling. That means subscription-level throttling always counts the requests made to an API through a given application subscription.
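As an illustration only (this is not the actual gateway code), the subscription-level throttle key described above could be assembled like this, assuming appId, apiContext and apiVersion have been read from the message context:

    // one counter per application + API (subscription) combination
    String subscriptionLevelThrottleKey = appId + ":" + apiContext + ":" + apiVersion;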

Up to API Manager 1.10, subscription-level throttling was applied on a per-user basis: if multiple users shared the same subscription, each of them got their own copy of the allowed quota, which becomes unmanageable at some point as the user base grows.

Also, when you define advanced throttling policies, you can define a burst control policy as well. This is very important because otherwise one user could consume the entire allocated quota within a short period of time, and the rest of the users could not use the API in a fair way.

[Diagram: subscription-level throttling with burst control]


WSO2 API Manager - How Application Level Throttling Works?


Application-level throttling lets users control the total number of requests coming to all APIs subscribed to through a given application. In this case counters are maintained against the application-user combination.
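As a purely illustrative sketch (again, not the actual gateway code), such a key could look like:

    // one counter per application + user combination
    String applicationLevelThrottleKey = appId + ":" + authorizedUser;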

[Diagram: application-level throttling]

WSO2 API Manager - How advanced API and Resource level throttling works?

If the request is not blocked, we move on to API-level throttling. Here we throttle at both the API level and the resource level. The API-level throttle key is always the API name, which means we can control the number of requests per API.

Advanced API-level policies are applicable at two levels (per-user level is not supported from the UI at the moment, but the runtime supports it):
  1. Per user level - all API request counts are maintained against the user (per user + API combination).
  2. Per API/Resource level - counts are maintained per API, without considering the user.

For the moment let's consider only the per-API count, as it is supported out of the box. First, API-level throttling happens: if you attached a policy when you defined the API, it applies at the API level.

Then you can also add throttling tiers at the resource level when you create the API, meaning a given resource is allowed a certain quota. Even if the same resource is accessed by different applications, the same resource-level allowance applies, since the counter is keyed on the resource rather than the application.

[Diagram: API and resource level throttling]

When you design a complex policy you can define it based on multiple parameters such as transport headers, IP addresses, the user agent, or any other header-based attribute. When we evaluate this kind of complex policy, the API or resource ID is always picked as the base key; multiple keys are then created based on the number of conditional groups in your policy.

[Diagram: conditional groups in an advanced throttling policy]

WSO2 API Manager new throttling - How do Blocking conditions work?

Blocking conditions are evaluated first because they are the least expensive check. All blocking conditions are evaluated on a per-node basis: they are simple condition checks, so we do not need to maintain counters across all gateway nodes. Requests are evaluated against the following attributes. All of these blocking conditions are added and evaluated at the tenant level, which means one tenant cannot block another tenant's requests.
apiContext - If users need to block all requests coming to a given API, they can use this blocking condition. Here the API context is the complete context of the API URL.
appLevelBlockingKey - If users need to block all requests coming from some application, they can use this blocking condition. The throttle key is constructed by combining the subscriber name and the application name.
authorizedUser - If we need to block requests coming from a specific user, this blocking condition can be used. The blocking key is the authorized user present in the message context.
ipLevelBlockingKey - IP-level blocking can be used when we need to block a specific IP address from accessing the system. This also applies at the tenant level, and the blocking key is constructed from the IP address of the incoming message and the tenant id.
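The following minimal Java sketch (illustrative names and structures, not the actual gateway classes) captures the spirit of these checks: simple set lookups evaluated per node, with no counters involved:

    import java.util.Set;

    // Illustrative only: tenant-level blocking checks, evaluated before any throttling counters
    final class BlockingConditions {
        private final Set<String> blockedApiContexts;
        private final Set<String> blockedApplicationKeys; // "subscriber:applicationName"
        private final Set<String> blockedUsers;
        private final Set<String> blockedIps;             // "tenantId:clientIp"

        BlockingConditions(Set<String> apis, Set<String> apps, Set<String> users, Set<String> ips) {
            this.blockedApiContexts = apis;
            this.blockedApplicationKeys = apps;
            this.blockedUsers = users;
            this.blockedIps = ips;
        }

        boolean isBlocked(String apiContext, String subscriber, String appName,
                          String authorizedUser, String tenantId, String clientIp) {
            return blockedApiContexts.contains(apiContext)
                    || blockedApplicationKeys.contains(subscriber + ":" + appName)
                    || blockedUsers.contains(authorizedUser)
                    || blockedIps.contains(tenantId + ":" + clientIp);
        }
    }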

[Diagram: blocking conditions]

Load balance data publishing to multiple receiver groups - WSO2 API Manager / Traffic Manager

In previous articles we discussed the Traffic Manager and different deployment patterns. In this article we will further discuss the Traffic Manager deployments we can use across data centers. Cross-data-center deployments must use the publisher group concept, because each event needs to be sent to every data center if we need global counts across DCs.

In this scenario there are two groups of servers, referred to as Group A and Group B. You can send events to both groups, and you can also carry out load balancing within each set as mentioned in load balancing between a set of servers. This scenario is a combination of load balancing between a set of servers and sending an event to several receivers.
An event is sent to both Group A and Group B. Within Group A it is sent to either Traffic Manager-01 or Traffic Manager-02; similarly, within Group B it is sent to either Traffic Manager-03 or Traffic Manager-04. In this setup you can have any number of groups and any number of Traffic Managers within a group, as required, by listing them accurately in the server URL. In this scenario it is mandatory to publish events to each group, but within a group we can do it in two different ways:

  1. Publishing to multiple receiver groups with load balancing within group
  2. Publishing to multiple receiver groups with failover within group

Now let's discuss both of these options in detail. This pattern is the recommended approach for multi-data-center deployments when we need unique counters across data centers. Each group resides within one data center, and within each data center two Traffic Manager nodes handle high-availability scenarios.

Publishing to multiple receiver groups with load balancing within group

As you can see in the diagram below, the data publisher pushes events to both groups. Since we have multiple nodes within each group, it sends each event to only one node at a time, in round-robin fashion: within Group A the first request goes to Traffic Manager 01, the next goes to Traffic Manager 02, and so on. If Traffic Manager 01 is unavailable, all traffic goes to Traffic Manager 02, which addresses failover scenarios.
[Diagram: load-balanced publishing to multiple receiver groups]

Similar to the other scenarios, you can describe this as a receiver URL. The groups should be mentioned within curly braces, separated by commas. Furthermore, each receiver belonging to a group should be within the curly braces, with the receiver URLs in comma-separated format. The receiver URL format is given below.

<DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>{tcp://127.0.0.1:9612,tcp://127.0.0.1:9613},{tcp://127.0.0.2:9612,tcp://127.0.0.2:9613}</ReceiverUrlGroup>
           <AuthUrlGroup>{ssl://127.0.0.1:9712,ssl://127.0.0.1:9713},{ssl://127.0.0.2:9712,ssl://127.0.0.2:9713}</AuthUrlGroup>
           ...
</DataPublisher>

Publishing to multiple receiver groups with failover within group


As you can see in the diagram below, the data publisher pushes events to both groups, but since there are multiple nodes within each group it sends each event to only one node at a time. If that node goes down, the event publisher sends events to the other node within the same group. This model guarantees message publishing to each server group.



[Diagram: failover publishing within receiver groups]
According to the following configuration, the data publisher sends events to both Group A and Group B. Within Group A, events go to either Traffic Manager 01 or Traffic Manager 02: if events go to Traffic Manager 01, they keep going to that node until it becomes unavailable, at which point events go to Traffic Manager 02.


<DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>{tcp://127.0.0.1:9612 | tcp://127.0.0.1:9613},{tcp://127.0.0.2:9612 | tcp://127.0.0.2:9613}</ReceiverUrlGroup>
           <AuthUrlGroup>{ssl://127.0.0.1:9712 | ssl://127.0.0.1:9713},{ssl://127.0.0.2:9712 | ssl://127.0.0.2:9713}</AuthUrlGroup>
           ...
</DataPublisher>

Failover throttle data receiver pattern for the API Gateway (WSO2 API Manager Traffic Manager)

In this pattern we connect the gateway workers to two traffic managers, so if one goes down the other can act as the traffic manager for the gateway. To achieve this we configure the gateway to push throttle events to both traffic managers. Please see the diagram below to understand how this deployment works: the gateway node pushes events to both traffic manager node 01 and node 02, and it also receives throttle decision updates from both traffic managers using the failover data receiver pattern.

The Traffic Managers are fronted with a load balancer as shown in the diagram.

The admin dashboard/publisher server then communicates with the traffic managers through the load balancer. When a user creates a new policy from the admin dashboard, that policy is stored in the database and published to one traffic manager node through the load balancer. Since we have a deployment synchronization mechanism for the traffic managers, one traffic manager can update the other with the latest changes, so it is sufficient to publish throttle policies to one node in an active/passive pattern (if one node is active, keep sending requests to it; if it is unavailable, send them to the other node). If we plan to use an SVN-based deployment synchronizer, created throttle policies should always be published to the manager node (a traffic manager instance) and the workers need to synchronize with it.
[Diagram: failover throttle data receiver pattern with load-balanced publisher]



The idea behind the failover data receiver endpoint is to avoid a single point of failure in the system. Since the broker deployed in each traffic manager stores and forwards the messages, if that server goes down the entire message flow of the system goes down, no matter what other servers and functions are involved. So, in order to build a robust messaging system, a failover mechanism is mandatory.

When a few instances of Traffic Manager servers are up and running in the system, each of these servers generally has its own broker. If one broker goes down, the gateway automatically switches to the other broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. As a whole, the system will not have downtime.

So, in order to achieve high availability on the data receiving side, we need to configure JMSConnectionParameters to connect to the multiple brokers running within the traffic managers, by adding the following configuration to each gateway. If a single gateway communicates with multiple traffic managers, this is the easiest way to configure it. Note that %26 in the AMQP URL below is simply the URL-encoded '&' character.


             
<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://127.0.0.1:5673?retries='5'%26connectdelay='50';tcp://127.0.0.1:5674?retries='5'%26connectdelay='50''</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>

Deploy multiple traffic managers with the load-balance data publisher / failover data receiver pattern

Sending all the events to several receivers

[Diagram: publishing all events to several receivers]
This setup involves sending all the events to more than one Traffic Manager receiver. This approach is mainly followed when you use other servers to analyze events together with the Traffic Manager servers. You can use this functionality to publish the same event to both servers at the same time. This is useful for performing real-time analytics with CEP and, with the same data, persisting it and performing complex analysis with DAS in near real time.

Similar to load balancing between a set of servers, in this scenario you need to modify the Data Agent URL. You should include all DAS/CEP receiver URLs within curly braces ({}), separated by commas, as shown below.

<DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>{tcp://127.0.0.1:9612},{tcp://127.0.0.1:9613}</ReceiverUrlGroup>
           <AuthUrlGroup>{ssl://127.0.0.1:9712},{ssl://127.0.0.1:9713}</AuthUrlGroup>
           <!--AuthUrlGroup>ssl://${carbon.local.ip}:9712</AuthUrlGroup-->
           <Username>${admin.username}</Username>
           <Password>${admin.password}</Password>
           <DataPublisherPool>
               <MaxIdle>1000</MaxIdle>
               <InitIdleCapacity>200</InitIdleCapacity>
           </DataPublisherPool>
           <DataPublisherThreadPool>
               <CorePoolSize>200</CorePoolSize>
               <MaxmimumPoolSize>1000</MaxmimumPoolSize>
               <KeepAliveTime>200</KeepAliveTime>
           </DataPublisherThreadPool>
       </DataPublisher>

Deploy multiple traffic managers with the load-balance data publisher / failover data receiver pattern

Load balancing events to sets of servers  

[Diagram: load balancing events to sets of servers]


In this setup there are two groups of servers, referred to as Group A and Group B. You can send events to both groups, and you can also carry out load balancing within each set as mentioned in load balancing between a set of servers. This scenario is a combination of load balancing between a set of servers and sending an event to several receivers. An event is sent to both Group A and Group B. Within Group A it is sent to either Traffic Manager-01 or Traffic Manager-02; similarly, within Group B it is sent to either Traffic Manager-03 or Traffic Manager-04. In this setup you can have any number of groups and any number of Traffic Managers within a group, as required, by listing them accurately in the server URL.

Similar to the other scenarios, you can describe this as a receiver URL. The groups should be mentioned within curly braces, separated by commas. Furthermore, each receiver belonging to a group should be within the curly braces, with the receiver URLs in comma-separated format. The receiver URL format is given below.

 <DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>{tcp://127.0.0.1:9612,tcp://127.0.0.1:9613},{tcp://127.0.0.2:9612,tcp://127.0.0.2:9613}</ReceiverUrlGroup>
           <AuthUrlGroup>{ssl://127.0.0.1:9712,ssl://127.0.0.1:9713},{ssl://127.0.0.2:9712,ssl://127.0.0.2:9713}</AuthUrlGroup>
           <Username>${admin.username}</Username>
           <Password>${admin.password}</Password>
           <DataPublisherPool>
               <MaxIdle>1000</MaxIdle>
               <InitIdleCapacity>200</InitIdleCapacity>
           </DataPublisherPool>
           <DataPublisherThreadPool>
               <CorePoolSize>200</CorePoolSize>
               <MaxmimumPoolSize>1000</MaxmimumPoolSize>
               <KeepAliveTime>200</KeepAliveTime>
           </DataPublisherThreadPool>
       </DataPublisher>

Deploy multiple traffic managers with the load-balance data publisher / failover data receiver pattern

Load balancing events to a set of servers

[Diagram: load balancing events to a set of servers]

This setup load balances events across all the Traffic Manager receivers. The load-balanced publishing is done in a round-robin manner, sending each event to each receiver in circular order without any priority. It also handles failover cases: if Traffic Manager Receiver-1 is marked as down, the Data Agent sends the data only to Traffic Manager Receiver-2 (and, if there are more nodes, round-robins across all of them). When Traffic Manager Receiver-1 becomes active again, the Data Agent automatically detects it, adds it back to the rotation, and again starts to load balance between all receivers. This functionality significantly reduces the loss of data and provides more concurrency.

For this functionality, include the server URLs in the Data Agent as a general DAS/CEP receiver URL, entered in comma-separated format as shown below.

 <DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>tcp://127.0.0.1:9612,tcp://127.0.0.1:9613</ReceiverUrlGroup>
           <AuthUrlGroup>ssl://127.0.0.1:9712,ssl://127.0.0.1:9713</AuthUrlGroup>
           <Username>${admin.username}</Username>
           <Password>${admin.password}</Password>
           <DataPublisherPool>
               <MaxIdle>1000</MaxIdle>
               <InitIdleCapacity>200</InitIdleCapacity>
           </DataPublisherPool>
           <DataPublisherThreadPool>
               <CorePoolSize>200</CorePoolSize>
               <MaxmimumPoolSize>1000</MaxmimumPoolSize>
               <KeepAliveTime>200</KeepAliveTime>
           </DataPublisherThreadPool>
       </DataPublisher>

Deploy multiple traffic managers with the failover data publisher / failover data receiver pattern


As we discussed earlier, the gateway data receiver needs to be configured with the failover pattern, but the data publisher can be configured in either the load-balance or the failover pattern. In this section we will see how to publish throttling events to the traffic managers in the failover pattern.

Failover configuration

[Diagram: failover event publishing to Traffic Manager receivers]

When using the failover configuration in publishing events to the Traffic Manager, events are sent to multiple Traffic Manager receivers in a sequential order based on priority. You can specify multiple Traffic Manager receivers so that events can be sent to the next server in the sequence in a situation where they were not successfully sent to the first server. In the scenario depicted in the image above, events are first sent to Traffic Manager Receiver-1; if it is unavailable, events are sent to Traffic Manager Receiver-2, and if that is also unavailable, to Traffic Manager Receiver-3.


 <DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>tcp://127.0.0.1:9612 | tcp://127.0.0.1:9613</ReceiverUrlGroup>
           <!--ReceiverUrlGroup>tcp://${carbon.local.ip}:9612</ReceiverUrlGroup-->
           <AuthUrlGroup>ssl://127.0.0.1:9712 | ssl://127.0.0.1:9713</AuthUrlGroup>
           <!--AuthUrlGroup>ssl://${carbon.local.ip}:9712</AuthUrlGroup-->
           <Username>${admin.username}</Username>
           <Password>${admin.password}</Password>
           <DataPublisherPool>
               <MaxIdle>1000</MaxIdle>
               <InitIdleCapacity>200</InitIdleCapacity>
           </DataPublisherPool>
           <DataPublisherThreadPool>
               <CorePoolSize>200</CorePoolSize>
               <MaxmimumPoolSize>1000</MaxmimumPoolSize>
               <KeepAliveTime>200</KeepAliveTime>
           </DataPublisherThreadPool>
       </DataPublisher>
          
<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://127.0.0.1:5673?retries='5'%26connectdelay='50';tcp://127.0.0.1:5674?retries='5'%26connectdelay='50''</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>

Failover Traffic Manager data receiver pattern for the API Gateway

The idea behind the failover data receiver endpoint is to avoid a single point of failure in the system. Since the broker deployed in each traffic manager stores and forwards the messages, if that server goes down the entire message flow of the system goes down, no matter what other servers and functions are involved. So, in order to build a robust messaging system, a failover mechanism is mandatory.

When a few instances of Traffic Manager servers are up and running in the system, each of these servers generally has its own broker. If one broker goes down, the gateway automatically switches to the other broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. As a whole, the system will not have downtime.

So, in order to achieve high availability on the data receiving side, we need to configure JMSConnectionParameters to connect to the multiple brokers running within the traffic managers. If a single gateway communicates with multiple traffic managers, this is the easiest way to configure it.

To do that, add the following configuration to each gateway worker. It will then pick up updates from any of the traffic managers, even if some of them are not functioning.

<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://127.0.0.1:5673?retries='5'%26connectdelay='50';tcp://127.0.0.1:5674?retries='5'%26connectdelay='50''</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>

WSO2 API Manager based solutions: frequently asked questions and answers - 03

Can API Manager audit a source request IP address, and the user who made the request?
Yes, information on the request IP and user is captured and can be logged to a report.

Automation support for API setup and for configuring the API gateway?
Supported; setup can be automated through tools like Puppet. Puppet scripts for common deployment patterns are also available for download.

Capability to run reports on API usage, call volume, latency, etc.?
Supported; API usage, call volume and latency can be reported. However, information on caching is not reported.

Logging support and capability to integrate with a 3rd party logging module?
By default our products use log4j for logging, and if needed we can plug in custom log appenders. It is possible to push logs to an external system as well.
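As an illustration (assuming the log4j 1.2 setup that Carbon-based products use), logs could be pushed to an external syslog server by defining an appender like the following in repository/conf/log4j.properties; the host and facility here are placeholders:

log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=logs.example.com
log4j.appender.SYSLOG.Facility=LOCAL0
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=[%d] %5p %c - %m%n

After defining the appender, attach SYSLOG to the desired logger in the same file.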

Billing and payment support & monitoring tools for API usage?
Billing and payment integration support is not available out of the box, but extension points are available to integrate with external billing and payment systems. WSO2 API Cloud (the SaaS offering of WSO2 API Manager) is already successfully integrated with a payment system, so users can implement the required extensions and get the job done.

Capability of message processing cycle control via pre/post processing?
Supported; it is possible to do some pre/post processing of messages based on what is available out of the box. However, some pre/post processing would require custom capabilities to be written.

Does it support adapters or connectors to 3rd party systems such as Salesforce?
Supported via WSO2 ESB; the WSO2 Integration Platform provides connectors to over 130 3rd party systems, including Salesforce.com. The entire list of connectors can be accessed and downloaded from the following site:
https://store.wso2.com/store/assets/esbconnector

Capability of monitoring and enforcing policy (i.e. message intercept)?
Supported; it is possible to validate a message against an XSD to ensure that it is compliant with a schema definition. It is also possible to audit and log messages, and to manage overall security by enforcing WS-Security on messages.

Support for multiple database technologies?
Database connectivity is provided via a JDBC adapter, and multiple JDBC adapters can be used at the same time. It is possible to change from one database technology to another as long as a JDBC adapter is available.

WSO2 API Manager based solutions: frequently asked questions and answers - 02

Capability to create, manage and deploy both sandbox environments and production environments?
It is possible to manage a Sandbox and a Production environment simultaneously. Each of these environments can have its own API Gateway.

Can deploy application(s)/project(s) from one environment to another environment?
Applications and subscriptions cannot be migrated from one environment to another directly, but a RESTful API is available to get a list of applications and recreate them in another environment. APIs, however, can be imported/exported from one environment to another out of the box.

Capability to apply throttling to APIs and route the calls for different API endpoints?
Supported; throttling can be applied to APIs based on a simple rule, such as the number of requests allowed per minute, or on a complex rule that considers multiple parameters, such as payload size and requests per minute, when throttling API calls.
The API Gateway can apply throttling policies for different APIs and route the API calls to the relevant back end.

Supports various versioning strategies including URL, HTTP header, and query parameter(s)?
API Manager supports a URL-based versioning strategy. If needed, we can implement our own.

Capability to support API life cycle management including 'create', 'publish', 'block', and 'retire' activities?
API life cycle can be managed by the API Manager. By default it supports the Created, Published, Deprecated, Blocked and Retired states.

Can it manage API traffic by environments (i.e. sandbox, production etc.) and by Gateway?
Supported; multiple API Gateways can be set up for Sandbox and Production environments to handle the traffic of each environment separately. https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways

Does it have throttling limit support?
Supported; throttling enables users to create more complex policies by mixing and matching different attributes available in the message, and it supports throttling scenarios based on almost all header details. WSO2 API Manager 2.0 offers more flexibility when it comes to defining rules. In addition, the blocking feature is very useful for protecting servers from common attacks and abuse by users.

Provides rate limiting support?
Supported; rate limiting support is available in the API Manager.

Capability to horizontally scale traffic in a clustered environment?
Supported; an API Manager instance can handle over 3500 transactions per second when it is fully optimized.

Support local caching for API responses (i.e. in non-clustered environment or when clustering not activated)?
Supported; it is possible to enable or disable API response caching for each API exposed.

Support distributed caching for API responses amongst nodes within a cluster?
Supported; caching is distributed to all Gateway nodes in the cluster.

Capability of auto-scaling via adding new nodes based on load ( i.e. auto spawning new instances and add to cluster)?
Autoscaling should be supported at the underlying infrastructure level. The cluster allows any new node to join or leave the cluster whenever required.

Supports conversion from SOAP to REST?
Supported; SOAP to REST conversion is available.

Supports conversion from XML to JSON and JSON to XML within request and response payloads?
Supported; it is possible to convert request and response payloads from XML to JSON and vice versa.
https://docs.wso2.com/display/AM200/Convert+a+JSON+Message+to+SOAP+and+SOAP+to+JSON

Supports redirecting API calls via the rewriting of URLs?
URL rewriting is supported; with it, it is possible to change the final API destination dynamically based on a predefined condition and route requests accordingly. It is also possible to define parameterized URLs which resolve a value at runtime according to an environment variable.

Ability to parse inbound URL for params including query parameters?
Supported; query parameter and path parameter reading and modification can be done before a request is sent to the backend.

Visual development, rapid mapping of activities and data?
Visual development and a visual data mapper are available to develop the required mediation sequences. Visual development is done via WSO2 Developer Studio, which is an Eclipse-based IDE.

Custom activity - define custom code with input/output interface?
Supported; custom code can be written in Java or JavaScript.
