Monday, 5 August 2019

Connected retailers get closer to customers


Shoppers are sick and tired of being treated like strangers when they enter a physical store.


The major challenge for retailers is that consumers don't care where, when or how they shop. They just want to be recognized and make a purchase with the minimum amount of fuss.
The retailer that can do this with the least friction and at the best price point will win their business.
In order to achieve this, retailers need to collect the disparate buying signals scattered along the length of the customer's buying journey and weave them together to construct a truly personalized experience. The greater the granularity of this consumer data, the higher the quality of the experience. These data points can include purchase history, average order value, demographics, browsing history and social media data.
On paper, this may sound relatively easy, but in real life it can be anything but. This is due to the siloed nature of many retailers' legacy systems, where shoppers' purchasing and behavioural data is held prisoner across multiple databases, departments and applications, with no simple way of creating a single view of the customer or mirroring their channel-agnostic journey. Different systems must not only communicate seamlessly, they must also be aware of the business rules associated with each application and service.
There is a solution that can overcome these barriers, however, cutting through this spaghetti-like IT architecture, allowing retailers to get closer to their customers without having to perform open-heart surgery on their systems. More and more tech-savvy retailers are waking up to the benefits of API platforms, for example, that can automatically interrogate disparate apps and databases in real-time, pulling together all the data needed to craft personalized experiences for their customers.

Clienteling – a new name for an old skill

Clienteling is the art of recognizing a customer and knowing their individual needs on a personal level. The benefits for the retailer include higher conversion rates, a higher rate of repeat business, increased customer advocacy and great opportunities to upsell and cross-sell highly relevant products and services. In practice, clienteling involves equipping a customer-facing salesperson with an internet-enabled device so they can access data from the following sources to create actionable insights:
Customer relationship management (CRM) systems
This gives the salesperson a real-time view of the customer’s profile, including purchase history, average order value, online browsing habits, likes and dislikes, wishlists and their loyalty status. Based on this granular information sales staff can make highly-personalized recommendations, nurturing their customer to achieve a sale.
Inventory management system
Once the store associate is in a position to share personalized recommendations, a real-time view of the inventory will show them if the item is in stock on the premises, or if it is available at another store or distribution hub. The customer will then have options of how and when they can complete their purchase – for example, buy in-store, take away or order and have it delivered.
Mobile EPoS
The final stage of clienteling is taking payment quickly and conveniently using mobile EPoS, enabling the customer to avoid queues at the checkout.
To witness slick clienteling in action you need only visit an Apple store. While Apple has a dominant online presence, a trip to its physical stores is still something to look forward to thanks to the tech giant’s mastery of the three stages outlined above.

The case for beacons

Personalization relies on individuals being recognized in real-time and tailored content being sent to them. This is exactly what beacon technology achieves. Push notifications or text messages can be sent to customers' phones as they walk through specific areas of a store. They can also be sent as a customer arrives at or leaves the premises, or if they simply walk past without entering.
Several big-name brands have already found success with beacon deployments. US department store Lord & Taylor has seen a 60 per cent customer engagement rate as a result of the beacon technology installed in one of its Boston locations.
Beacon technology also provides data-rich back-office consumer intelligence, such as how long customers stopped at personalized beacon-enabled displays, the relationship between beacon-enabled sales offers and actual sales, and similar information that can be analysed to adjust offers as well as the staffing and placement of sales associates. These insights enable retailers to drive better revenue and profits from bricks-and-mortar locations.

Thursday, 14 April 2016

How to create a SOA 12.2.1 docker image on OracleLinux


Introduction

What is docker?

Unless you've been living without internet access for the last two years, it would be hard not to have at least heard of Docker. But, as an emerging technology, not everyone has taken the time to work out what Docker is, where it fits in and how it can benefit you.

So, what exactly is Docker? Here's how Docker themselves describe it:

    Docker is an open platform for developers and sysadmins of distributed applications.

Essentially, Docker is a container based system for your applications. If you’re used to the concept of virtual servers, Docker provides further levels of abstraction for your application. Here’s a visual representation of how it differs:

VM vs Containers - Docker


Rather than just being one part of the puzzle, Docker provides a number of components and tools to assist with the entire lifecycle management of the application. This includes the container environment, image management and orchestration.

Docker started its life as an internal project within a hosting company called dotCloud, but quickly took off once they open-sourced it in early 2013. Since then, it's benefited from over 15,000 software commits from over 900 contributors.

Why use Docker?

Now that you have a basic understanding of Docker, there are a number of great reasons to start using it.
  • It's very fast. Starting a Docker container can complete in as little as 50ms. That's not a typo, it really can be this quick! This is the advantage of having such high levels of abstraction: you reduce the number of components you need to run, which also means there's very little to no overhead in its implementation.
  • One-command deployments. It really is as simple as installing an application with one line. Want to install MySQL? One command. Want WordPress, MySQL, Nginx and Memcache all installed and configured? Yep, it's one command (see the sketch after this list).
  • Pre-configured Apps. At last count, there were over 13,000 applications already packaged as a Docker image. Chances are, if you're using a common application then most of the initial work has already been done for you. But it doesn't end there. You can take an existing image, make your own changes and push it to your own repository for ease of re-deployment.
  • Resource Isolation. Previously, if you ran all of your services on the one server then there was a chance one of them could exhaust all of the server resources. Docker allows you to set, monitor and adjust these on a per-application or per-service basis.
  • Consistency. Docker really is a "write once, deploy anywhere" type of environment. It removes all of the hassle of going from a development to a production environment or similar. Each set of libraries is tightly coupled to the Docker image to ensure consistency.
  • A complete platform. Rather than just being one part of the puzzle, Docker is shaping up to be a complete platform. There's the base Engine for the containers, the Registry for image management, Compose to orchestrate complex deployments, Swarm for clustering and Machine for provisioning. This is what's made Docker different from other container implementations: you can manage the entire lifecycle quite easily.
  • Scale. This is one area where Docker really shines, especially if you have a micro-service based application. Compose and Swarm assist with deploying scalable applications, and there are third-party tools like Kubernetes and Mesos which take it to the next level. We're talking the ability to manage the entire lifecycle with up to millions of containers, so scale isn't a problem!
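As a minimal illustration of those one-command deployments (the container name and password value here are just examples):

# start a MySQL container in a single command
docker run -d --name my-mysql -e MYSQL_ROOT_PASSWORD=secret mysql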
Moreover, Oracle is now quite focused on certifying its Oracle Fusion Middleware for Docker. Details are here. Below is the current Docker-related certification status:

Host operating systems (x64):
  • Oracle Linux 6 (UL5+)
  • Oracle Linux 7 (UL0+)
  • Red Hat Enterprise Linux 7 (UL0+)
Certified in Docker Containers:
  • Oracle WebLogic Server 12c (12.2.1) (Oracle Linux 6 (UL6+) is required. Refer to Note 6)
  • Oracle WebLogic Server 12c (12.1.3) (Refer to Note 4)


How can docker be useful to us?

Docker could be particularly useful for the following reasons:
  • More reliable releases
  • Enable continuous delivery
  • Recreate environments
  • Internal tools

Docker Installation

The guide below assumes the operating system is Oracle Linux 7.
Oracle provides a very good guide available here on how to install and configure Docker on OL7 (it is also available for OL6). I will just post here the main steps.

Operating systems pre-requirement

Docker version 1.9 and later require that you configure the system to use the Unbreakable Enterprise Kernel Release 4 (UEK R4) and boot the system with this kernel.

  • Disable access to the ol7_x86_64_UEKR3 channel and enable access to the ol7_x86_64_UEKR4 channel.

If you use Oracle Public Yum, disable the ol7_UEKR3 repository and enable the ol7_UEKR4 repository in the /etc/yum.repos.d/public-yum-ol7.repo file, for example:

[ol7_UEKR3]
name=Latest Unbreakable Enterprise Kernel Release 3 for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum-qa.oracle.com/repo/OracleLinux/OL7/UEKR3/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[ol7_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum-qa.oracle.com/repo/OracleLinux/OL7/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

You can download the latest version of this file from http://public-yum.oracle.com/public-yum-ol7.repo.

  • Run the following command to upgrade the system to UEK R4:

# yum update

It is also recommended to make UEK R4 the default boot kernel, see Section 4.3, “About the GRUB 2 Boot Loader” at https://docs.oracle.com/cd/E52668_01/E54669/html/ol7-grub2_bootloader.html.
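On OL7 with GRUB 2 this typically amounts to something like the following (a sketch, assuming GRUB_DEFAULT=saved in /etc/default/grub and a BIOS-style grub.cfg path; the menu-entry index 0 is just an example, check your own configuration):

# list the available kernel menu entries
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
# pick the UEK R4 entry by index, then regenerate the config
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg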

  • Reboot the system, selecting the UEK R4 kernel (version 4.1.12) if this is not the default boot kernel.

# systemctl reboot

  • Enable the ol7_addons repository in the /etc/yum.repos.d/public-yum-ol7.repo file, for example:

[ol7_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL7/addons/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

Docker engine Installation

  • Install the docker-engine package.

# yum install docker-engine

By default, the Docker Engine uses the device mapper as a storage driver to manage Docker containers. 
As with LXC, there are benefits to using the snapshot features of btrfs instead.

  • Configure Docker images to be saved into a btrfs filesystem (optional but strongly recommended)
Oracle recommends using btrfs because of the stability and maturity of the technology. If a new device for btrfs is not available, you should use overlay as the storage driver instead of device-mapper for performance reasons. You can configure overlay by adding the --storage-driver=overlay option to DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage. The overlayfs file system is available with UEK R4.
For more information, see https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/.
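If you go the overlay route instead, the relevant line in /etc/sysconfig/docker-storage would look something like this (a sketch based on the option named above; the file may carry other options on your system):

# /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS='--storage-driver=overlay'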

To configure the Docker Engine to use btrfs instead of the device mapper:

Use yum to install the btrfs-progs package.

# yum install btrfs-progs

If the root file system is not configured as a btrfs file system, create a btrfs file system on a suitable device, such as /dev/sdb in this example.

If you are using VirtualBox (or any virtual machine tool) this is quite easy; just follow the steps below (details are available here):

1. Add disk storage to the VM using the Oracle VirtualBox tool (make sure it's big! I used a 50 GB one)

2. Turn on the VirtualBox OL7 machine and switch to the root user.

3. Then use the fdisk utility to list the hard drives.
# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e4833

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64       13055   104344576   8e  Linux LVM

Disk /dev/sdb: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

.....

From the first block you can clearly see that we have two hard drives, /dev/sda and /dev/sdb.
You can also see that /dev/sda has been formatted and mounted using logical volume management.
This was surely done when you created the VM or when the template was created, as it is the default way Linux formats hard drives.

4. We need to partition the /dev/sdb device. Use the following command to partition the device.

fdisk /dev/sdb 

Follow the screen instructions as shown below.
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2e3c77cd.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p

Partition number (1-4): 1
First cylinder (1-1566, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1566, default 1566):
Using default value 1566

Command (m for help): w

The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks. 
We have used 'n' to create a new partition, 'p' to make it a primary partition, '1' to specify the partition number, and the defaults for the begin and end of the partition size range.

Finally, 'w' makes the changes permanent.

5. Query the partition table again to see that the /dev/sdb1 partition has now been created on the /dev/sdb disk.

# fdisk -l

6. Format the partition using the recommended btrfs file system:

mkfs.btrfs /dev/sdb1

7. Use the blkid command to display the UUID of the device and make a note of this value, for example:

# blkid /dev/sdb1
/dev/sdb1: UUID="a7dc508d-5bcc-4112-b96e-f40b19e369fe" \
  UUID_SUB="1aa666eb-0861-4dc8-a37e-f3c87c7003b8" TYPE="btrfs"

8. Create the file /etc/systemd/system/var-lib-docker.mount with the following contents:

[Unit]
Description = Docker Image Store

[Mount]
What = UUID=REPLACE_HERE_THE_UUID_value
Where = /var/lib/docker
Type = btrfs

[Install]
WantedBy = multi-user.target


This file defines the target that systemd uses to mount the file system on /var/lib/docker.

9. Enable the var-lib-docker.mount target.

# systemctl enable var-lib-docker.mount
ln -s '/etc/systemd/system/var-lib-docker.mount' \
  '/etc/systemd/system/multi-user.target.wants/var-lib-docker.mount'

This command enables systemd to mount the file system when required for use with the docker service. It does not mount the file system. If you need to mount the file system independently of Docker, use the following command:

# systemctl start var-lib-docker.mount

Create the drop-in file /etc/systemd/system/docker.service.d/var-lib-docker-mount.conf, which contains the following lines:

[Unit]
Requires=var-lib-docker.mount
After=var-lib-docker.mount

These entries tell systemd to mount the /var/lib/docker file system by using the var-lib-docker.mount target before starting the docker service.

After doing this, restart the OS and make sure Docker restarts correctly.

By running the command docker info you can check that the file system configuration is now applied:
Containers: 0
Images: 0
Server Version: 1.9.1
Storage Driver: btrfs
 Build Version: Btrfs v3.19.1
 Library Version: 101
Execution Driver: native-0.2

Problems:

In my test, Docker did not restart, so I checked the log using the command below:
more /var/log/messages | grep docker

I saw this:
SELinux is not supported with the BTRFS graph driver!

Googling the issue, I found a workaround to this problem (I am not sure whether it is acceptable for a production environment).

Edit the file 
vi /etc/sysconfig/docker

Remove --selinux-enabled from the options, as below:
OPTIONS=''
#'--selinux-enabled'

Restart Docker!
# systemctl daemon-reload
# systemctl restart docker

Docker Post-Installation steps

  • Start the docker service and configure it to start at boot time:

# systemctl start docker
# systemctl enable docker
ln -s '/etc/systemd/system/docker.service' \
  '/etc/systemd/system/multi-user.target.wants/docker.service'
# systemctl status docker

Useful commands to check Docker status:

# docker info
# docker version

  • Enabling non-root users to run Docker commands

Create the docker group:

# groupadd docker

Restart the docker service:

# service docker restart

The UNIX socket /var/run/docker.sock is now readable and writable by members of the docker group.

Add the users that should have Docker access to the docker group:

# usermod -a -G docker user1
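As a quick check (user1 is just an example name), log out and back in as that user so the new group membership applies, then run a Docker command without root:

$ docker version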

SOA Image Creation

To create the Docker SOA 12.2.1 image I used this blog post here; the Dockerfiles I used are also available here.
Therefore make sure you execute the following steps:
1. Open a terminal in the VM and sudo as root, then:
  1. mkdir /fmw/soa
  2. cd /fmw/soa
  3. mkdir installers
2. Copy into installers the following files:
  1. fmw_12.2.1.0.0_infrastructure_Disk1_1of1.zip
  2. fmw_12.2.1.0.0_soaqs_Disk1_1of2.zip
  3. fmw_12.2.1.0.0_soaqs_Disk1_2of2.zip
  4. jdk-8u77-linux-x64.gz
  5. silent.rsp (download it from the blog page as above and customize)
3. Copy into /fmw/soa the following file:
  1. Dockerfile.12.2.1.0.0 (download it from the blog page as above and customize)
4. Change directory to /fmw/soa and execute the command below:
  1. docker build -t oracle/fmw:12.2.1.0.0 -f ./Dockerfile.12.2.1.0.0 .
5. Run the container just created with the following:
  1. docker run -i -P --name=oracle-soa-1 -t oracle/fmw:12.2.1.0.0
6. Access the machine and verify everything has been installed successfully (you could start JDev following the X11 display export procedure further below).

Oracle XE Image Creation

To create the Docker Oracle XE 11 image I used the Dockerfile here.
Therefore make sure you execute the following steps:
1. Open a terminal in the VM and sudo as root, then:
  1. mkdir /database/
  2. cd /database/
  3. mkdir xe
2. Copy into xe the following files:
  1. Dockerfile.11.2.0.1.0 (download it from the blog post above)
  2. oracle-xe-11.2.0-1.0.x86_64.rpm.zip
3. Change directory to /database and execute the command below:
  1. docker build -t oracle/db/xe:11.2.0.1.0 -f ./xe/Dockerfile.11.2.0.1.0 .
4. Run Docker:
  1. docker run -i -P --name=oracle-db-xe-1 -t oracle/db/xe:11.2.0.1.0
5. Test that the database is up and running by connecting via SQL Developer or similar tools (you can connect from any machine in the same subnet; check the external port by running docker ps -a, as sketched below).
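A minimal sketch of that check (the IP, mapped port and credentials are illustrative; yours will differ):

# find the host port that Docker mapped to the container's 1521 listener
docker ps -a
# then connect with any Oracle client, e.g. SQL*Plus
sqlplus system/oracle@//192.168.56.101:32769/XE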

            Possible Problems

If you have tried to create Docker containers for Oracle Database XE, you might not be able to configure/start a database instance successfully. The installation may complete successfully, but when you try to configure or start an instance you might notice the following errors in the logs.

ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
ORA-00845: MEMORY_TARGET not supported on this system

The reason might be related to the Docker version used. Oracle Database needs shared memory [/dev/shm] of at least 1 GB, while any Docker container (up to version 1.9) only gets 64MB.

Some blog posts suggest getting the latest Docker (1.10.x) and making use of the new option --shm-size=2g:

            cat >/etc/yum.repos.d/docker.repo <<-EOF
            [dockerrepo]
            name=Docker Repository
            baseurl=https://yum.dockerproject.org/repo/main/oraclelinux/7
            enabled=1
            gpgcheck=1
            gpgkey=https://yum.dockerproject.org/gpg
            EOF

Then make sure the Docker engine is updated by running the below:
yum install docker-engine

Another approach (like the one I used) avoids this issue by adding the following to the Dockerfile:

            # Work around sysctl limitation of docker
            RUN sed -i -e 's/^\(memory_target=.*\)/#\1/' /u01/app/oracle/product/11.2.0/xe/config/scripts/initXETemp.ora \
                && sed -i -e 's/^\(memory_target=.*\)/#\1/' /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora

Also, build the XE database image FROM oraclelinux:7 (6 does not work).

SOA Domain creation

In order to create a domain, first of all the SOA Docker container needs to be able to talk to the database one. Docker by default allows containers to see each other; however, it is recommended to have a dedicated network.
Good practice is to create a sub-network in the Docker engine. This can be performed with the following steps:
1. docker network create my-soa-network
2. Run the container just created with the following (adding -u=0 here means the container will be accessed as the root user, which allows installing telnet for testing the connection to the database: "yum install telnet"):
  1. docker run -i -P --net=my-soa-network --name=oracle-soa-1 -t oracle/fmw:12.2.1.0.0
3. Run in another terminal:
  1. docker run -i -P --net=my-soa-network --name=oracle-db-xe-1 -t oracle/db/xe:11.2.0.1.0
The second step is to create the SOA DB schemas using the RCU. To do that, use ifconfig to get the DB IP, then:
1. In the SOA Docker container, create a file "passwd.txt" with one password for each database user that RCU is going to create. For the SOA domain, 7 passwords are needed (see the sketch after this list).
2. Then execute the following:
  1. rcu -silent -createRepository -connectString <DB IP>:1521:XE -databaseType ORACLE -dbUser SYS -dbRole SYSDBA -schemaPrefix <YOUR PREFIX> -variables SOA_PROFILE_TYPE=SMALL -component SOAINFRA -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component MDS -component WLS -component UCSUMS -f < passwd.txt
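A minimal sketch of creating that password file (the password value is purely illustrative):

# one password per line, one for each of the 7 schemas RCU will create
for i in $(seq 1 7); do echo "Welcome1"; done > passwd.txt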
The last step is to create the domain using a WLST script:
1. Run the following WLST script using your environment details:
  1. WLST

Tips

Display export via X11

Sometimes it is nice to have the UI.
Docker and the OracleLinux template by default do not support any UI, as it is supposed to be a lightweight system.
However, exporting the display is easy and can be done with the following steps:
1. Run the image with the option -u (user root) as below:
  1. docker run -u 0 -it --name soa1221 oracle/fmw:12.2.1.0.0
2. Install wget, then download and install the X11 rpm package:
  1. yum install wget
  2. wget http://vault.centos.org/6.2/os/x86_64/Packages/xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64.rpm
  3. yum localinstall xorg-x11-server-Xvfb-1.10.4-6.el6.x86_64.rpm
3. Change user to oracle, export the DISPLAY and run JDev:
  1. su oracle
  2. export DISPLAY=192.168.1.220:1.0   (assuming this is the target IP where you'd like the UI to appear; of course this requires an X11 server, in my case I used MobaXterm, a very simple one)
  3. Run JDeveloper or any other tool with a UI

SSH Server

In case the SSH server is needed, the command below can be added to the Dockerfile; the container also needs to be started with the option -p 22.

RUN yum -y install openssh-server

However, opening up SSH communication is not recommended, since it is much more secure to use the docker exec command instead.
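For example, with the container created earlier:

# open an interactive shell inside the running container, no SSH daemon needed
docker exec -it oracle-soa-1 bash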

Thursday, 28 January 2016

Connect and consume data assets with OSB12c and WebCenter Sites 11g using the REST api


In one of the projects I've worked on, I configured the automatic creation/update and deletion of assets in WebCenter Sites 11g using its REST api via OSB.

The configuration is a bit tricky, so I want to share the solution.

I am not giving step-by-step details of how this can be implemented, as I am sharing the code, but I'll explain the most important concepts.

What is needed for this tip:

• JDeveloper 12.1.3 (SOA Quick Start version)
• An account with read/write permissions on a WebCenter Sites server
• The AssetType created in Sites

OSB Services Implemented

The Pipeline in the project contains the services below:

XSLT Name                | WCSites Component used | Relative URI                          | HTTP Operation
getTicketAuthorization   | -                      | cas/v1/tickets                        | GET
getCourseIdByCode        | Custom Asset Course    | <SITE NAME>/types/<ASSET NAME>/search | GET
getSessionIdByCode       | Custom Asset Session   | <SITE NAME>/types/<ASSET NAME>/search | GET
getContentIdByName       | Asset Generic          | <SITE NAME>/types/<ASSET NAME>/search | GET
getParentIdById          | Asset Generic          | <SITE NAME>/types/<ASSET NAME>/assets | GET
createUpdateCourse       | Asset Course           | <SITE NAME>/types/<ASSET NAME>/assets | POST
createUpdateCourseParent | Asset Generic          | <SITE NAME>/types/<ASSET NAME>/assets | POST
createUpdateSession      | Custom Asset Session   | <SITE NAME>/types/<ASSET NAME>/assets | POST
isCourseSessionReady     | Asset Course           | <SITE NAME>/types/<ASSET NAME>/assets | GET
isSessionDifferent       | Custom Asset Session   | <SITE NAME>/types/<ASSET NAME>/assets | GET
deleteSession            | Custom Asset Session   | <SITE NAME>/types/<ASSET NAME>/assets | DELETE

Below are some important points to highlight:

• To get an authentication token from WCSites 11, the endpoint http://HOST:PORT/cas/v1/tickets has to be called twice: the first time with the username and password in the body of a POST, the second time with the returned ticket in the URL of the request (see the sketch after this list). Please note that this "service" does not return XML but HTML, so the response has to be parsed within OSB.
• All the HTTP POST/DELETE methods on WCSites must be called with the multiticket parameter in the URL:
<http:parameter name="multiticket" value="<ticket>">
</http:parameter>
• The search method uses a URL parameter as below. Please note that searching against a specific field has to be enabled in Sites first:
<http:parameter name="field:name:equals" value="<CODE VALUE>">
</http:parameter>
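A minimal sketch of that double call with curl (host, credentials and the ticket value are illustrative; the real ticket has to be parsed out of the first HTML response):

# first call: POST the credentials; the HTML response contains the ticket
curl -X POST -d "username=myuser&password=mypassword" http://HOST:PORT/cas/v1/tickets
# second call: POST to the ticket URL built from the first response
curl -X POST http://HOST:PORT/cas/v1/tickets/TGT-1-example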

Possible problems

• In case the REST api invocation fails with this:
"OSB-38000 BAD Gateway"

Then uncheck Chunked mode in the OSB Business Service and redeploy it!


• In case the WebService invocation fails with this:
"MOVED TEMPORARILY"

Then the credential account is not configured correctly in OSB or the process can't see it. Review it!
Tip: By default, if an authorization failure occurs, the login page for Central Authentication Service (CAS) is displayed. If you want to receive a 500 error instead, add auth-redirect=false to the URL when making the request (see the example below).
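For instance, a hypothetical search call with the flag appended (site, asset type and value are the placeholders from the table above):

http://HOST:PORT/<SITE NAME>/types/<ASSET NAME>/search?field:name:equals=<CODE VALUE>&auth-redirect=false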

Please find the REST api documentation here!
The source code of the OSB pipeline created is here.

Wednesday, 11 November 2015

Oracle ServiceCloud Rightnow Integration, XSLT Transformations!

I've just rolled out to a live environment a SOA integration project with Oracle Service Cloud RightNow.

The customer needed to migrate from an in-house CRM to Oracle Service Cloud, and with my company Infomentum we helped them take this big step. Since then I have gained a lot of experience with OSC WebServices.

Here I just want to share the complex XSLT transformations which we implemented to communicate with the OSC WebServices; hopefully these can speed up other SC integration projects.

There are 6 transformations in the ZIP package (we implemented more):

XSLT Name                 | SC Object        | Out of the box Object? | Operation Type
xsltContact2Update        | CONTACT          | Yes                    | UPDATE
xsltOrganisationToUpdate  | ORGANIZATION     | Yes                    | UPDATE
xsltProgrammeToUpdate     | CO.PROGRAMME     | No                     | UPDATE
xsltProgrammeTypeToUpdate | CO.PROGRAMMETYPE | No                     | UPDATE
xsltCourseToUpdate2       | CO.COURSE        | No                     | UPDATE
xsltSessionToUpdate       | CO.SESSION       | No                     | UPDATE

In the XSLTs you'll find all the details about the TARGET columns (the Oracle Service Cloud ones).

Here are some important concepts I want to highlight:

• SC columns in the XSLT are sometimes out-of-the-box columns; in other cases they are custom ones. In the XSLT the latter are identified with the GenericFields tag.

                         <rng_v1_2:GenericFields dataType="OBJECT" name="C">
                            <rng_v1_2:DataValue>
                              <rng_v1_2:ObjectValue xsi:type="rng_v1_2:GenericObject">
                                <rng_v1_2:ObjectType>
                                  <rng_v1_2:TypeName>ContactCustomFieldsc</rng_v1_2:TypeName>
                                </rng_v1_2:ObjectType>
                                <rng_v1_2:GenericFields dataType="BOOLEAN" name="yp_surveydeclined_mail_bol">
                                    <rng_v1_2:DataValue>
                                      <rng_v1_2:BooleanValue>
                                        <xsl:value-of select="/ns0:contact/ns0:isMailOptionSur"/>
                                      </rng_v1_2:BooleanValue>
                                    </rng_v1_2:DataValue>
                                </rng_v1_2:GenericFields>
                              </rng_v1_2:ObjectValue>
                            </rng_v1_2:DataValue>
                          </rng_v1_2:GenericFields>

• Custom fields can be in the sub-package C or CO (you'll find this package in the name attribute of the GenericFields tag). The former is used for custom fields, the latter for custom object relationship fields. In this case the data type might be something like dataType="NAMED_ID", which means it is related to a complex object.

                      <rng_v1_2:GenericFields dataType="OBJECT" name="CO">
                        <rng_v1_2:DataValue>
                          <rng_v1_2:ObjectValue xsi:type="rng_v1_2:GenericObject">
                            <rng_v1_2:ObjectType>
                              <rng_v1_2:TypeName>ContactCustomFieldsc</rng_v1_2:TypeName>
                            </rng_v1_2:ObjectType>
                            <rng_v1_2:GenericFields dataType="NAMED_ID" name="all_addr_region_lst">
                                <rng_v1_2:DataValue>
                                  <rng_v1_2:NamedIDValue>
                                    <rnb_v1_2:Name>
                                      <xsl:value-of select="/ns0:contact/ns0:Region"/>
                                    </rnb_v1_2:Name>
                                  </rng_v1_2:NamedIDValue>
                                </rng_v1_2:DataValue>
                            </rng_v1_2:GenericFields>
                          </rng_v1_2:ObjectValue>
                        </rng_v1_2:DataValue>
                      </rng_v1_2:GenericFields>

• In order to blank any field in SC, the client must pass the attribute xsi:nil="true" in the attribute tag or, for custom fields, in the DataValue tag.

                      <rng_v1_2:GenericFields dataType="OBJECT" name="CO">
                        <rng_v1_2:DataValue>
                          <rng_v1_2:ObjectValue xsi:type="rng_v1_2:GenericObject">
                            <rng_v1_2:ObjectType>
                              <rng_v1_2:TypeName>ContactCustomFieldsc</rng_v1_2:TypeName>
                            </rng_v1_2:ObjectType>
                            <rng_v1_2:GenericFields dataType="NAMED_ID" name="all_addr_region_lst">
                                <rng_v1_2:DataValue xsi:nil="true">
                                  <rng_v1_2:NamedIDValue>
                                    <rnb_v1_2:Name></rnb_v1_2:Name>
                                  </rng_v1_2:NamedIDValue>
                                </rng_v1_2:DataValue>
                            </rng_v1_2:GenericFields>
                          </rng_v1_2:ObjectValue>
                        </rng_v1_2:DataValue>
                      </rng_v1_2:GenericFields>

• The PhoneList and EmailList attributes need to be managed via the action attribute (update, add, remove) in the XSLT and via the type ID (the ID of the phone type, since there might be multiple, like Mobile, Work, Landline, etc.; the same applies to Email).

                       <rno_v1_2:Phones>
                          <rno_v1_2:PhoneList action="update">
                            <rno_v1_2:Number>
                              <xsl:value-of select="/ns0:contact/ns0:TelWork"/>
                            </rno_v1_2:Number>
                            <rno_v1_2:PhoneType>
                              <rnb_v1_2:ID id="{0}"/>
                            </rno_v1_2:PhoneType>
                          </rno_v1_2:PhoneList>


                          <rno_v1_2:PhoneList action="remove">
                            <rno_v1_2:PhoneType>
                              <rnb_v1_2:ID id="{1}"/>
                            </rno_v1_2:PhoneType>
                          </rno_v1_2:PhoneList>

                          <rno_v1_2:PhoneList action="remove">
                            <rno_v1_2:PhoneType>
                              <rnb_v1_2:ID id="{2}"/>
                            </rno_v1_2:PhoneType>
                          </rno_v1_2:PhoneList>



Below is a simple transformation used for the Programme CustomObject update:

                <ns1:Update>
                  <ns1:RNObjects xsi:type="rng_v1_2:GenericObject">
                    <rnb_v1_2:ID id="{$InvokeGetProgramme_QueryCSV_OutputVariable.parameters/ns1:QueryCSVResponse/ns1:CSVTableSet/ns1:CSVTables/ns1:CSVTable/ns1:Rows/ns1:Row}"/>
                    <rng_v1_2:ObjectType>
                      <rng_v1_2:Namespace>CO</rng_v1_2:Namespace>
                      <rng_v1_2:TypeName>Programme</rng_v1_2:TypeName>
                    </rng_v1_2:ObjectType>
                    <rng_v1_2:GenericFields dataType="INTEGER" name="all_soa_totcodeid_int">
                      <rng_v1_2:DataValue>
                        <rng_v1_2:IntegerValue>
                          <xsl:value-of select="/ns0:programme/ns0:CodeID"/>
                        </rng_v1_2:IntegerValue>
                      </rng_v1_2:DataValue>
                    </rng_v1_2:GenericFields>
                    <rng_v1_2:GenericFields dataType="STRING" name="all_all_name_txt">
                      <rng_v1_2:DataValue>
                        <rng_v1_2:StringValue>
                          <xsl:value-of select="/ns0:programme/ns0:Description"/>
                        </rng_v1_2:StringValue>
                      </rng_v1_2:DataValue>
                    </rng_v1_2:GenericFields>
                    <rng_v1_2:GenericFields dataType="BOOLEAN" name="all_all_deleted_bol">
                      <rng_v1_2:DataValue>
                        <rng_v1_2:BooleanValue>
                          <xsl:value-of select="/ns0:programme/ns0:Deleted"/>
                        </rng_v1_2:BooleanValue>
                      </rng_v1_2:DataValue>
                    </rng_v1_2:GenericFields>
                    <rng_v1_2:GenericFields name="all_soa_modified_dt" dataType="DATETIME">
                      <rng_v1_2:DataValue>
                        <rng_v1_2:DateTimeValue>
                          <xsl:value-of select="ptutlGmt:getCurrentDateInGMT(string(/ns0:programme/ns0:SOA_ModifiedDate))"/>
                        </rng_v1_2:DateTimeValue>
                      </rng_v1_2:DataValue>
                    </rng_v1_2:GenericFields>
                    <rng_v1_2:GenericFields dataType="BOOLEAN" name="all_all_fromsoa_bol">
                      <rng_v1_2:DataValue>
                        <rng_v1_2:BooleanValue>1</rng_v1_2:BooleanValue>
                      </rng_v1_2:DataValue>
                    </rng_v1_2:GenericFields>
                  </ns1:RNObjects>
                </ns1:Update>

Please download the transformations from here!

Also find here more information about this challenging project:
http://www.computing.co.uk/ctg/news/2433947/princes-trust-opts-for-oracle-for-digital-transformation
http://www.infomentum.com/uk/about-us/media-centre/news/princes-trust-golive

Let me know of any issues!!!




Tuesday, 28 April 2015

Connect and consume data with Oracle RightNowCX using the new SOA12c RightNow adapter


The Oracle RightNow adapter was released for SOA 12.1.3 just a couple of months ago, and I tested it as soon as I heard of it!

What is needed for this tip

• JDeveloper 12.1.3
• An account with read rights on the WebServices exposed by an Oracle RightNow instance

Before starting!

Make sure the following patch bundle has been applied to your SOA/JDev home.

Bundle Patch for Bug: 20423408

The patch can be downloaded from Oracle Support of course, and installed using opatch apply.
The patch must be applied to both the SOA server home (if not in the JDev home) and the JDev home, since the new plugin, which shows the RightNow adapter wizard, must be configured into JDeveloper.
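A minimal sketch of the opatch step (the patch archive name and staging path are illustrative; always follow the patch README):

# unzip the downloaded bundle patch and apply it from its top-level directory
unzip p20423408_121300_Generic.zip -d /tmp/patch
cd /tmp/patch/20423408
$ORACLE_HOME/OPatch/opatch apply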
            Remember to perform the post-installation steps (patch READ-ME for details):


1. Log in to Fusion Middleware Control Enterprise Manager.
2. Expand "WebLogic Domain" in the left panel.
3. Right-click on the domain you want to modify and select Security > System Policies to display the System Policies page.
4. In the System Policies page, expand "Search". For "Type" select "Codebase", for "Includes" enter "jca" and click the arrow button.
5. Select "jca-binding-api.jar" in the returned search results and click "Edit".
6. In the "Edit System Grant" page, click "Add".
7. In the "Add Permission" page, click "Select here to enter details for a new permission" and enter the following:
• Permission Class: oracle.security.jps.service.credstore.CredentialAccessPermission
• Resource Name: context=SYSTEM,mapName=SOA,keyName=*
• Permission Action: *
8. Click "OK" to save the new permission.


To verify the installation went well, please double-check you've got the RightNow adapter in the Cloud Adapter component palette:



After the installation, start JDeveloper with the option "jdev.exe -clean".

Hands on!

• Generate a SOA application with an empty SOA project.
• Drag and drop an Oracle RightNow component into the External References lane and the wizard will pop up.
• Insert the WSDL URL (the WSDL and XSD schemas will be downloaded); also create a new CSF key in JDeveloper with your username and password for Oracle RightNow.


• Select the Create WSDL operation and the Contact business object



• Click FINISH
• Now add a WIRE from the BPEL process to the RightNow adapter
• Edit the composite input XSD adding the following fields:
<element name="process">
  <complexType>
    <sequence>
      <element name="Name" type="string"/>
      <element name="LastName" type="string"/>
      <element name="Address" type="string"/>
      <element name="PostCode" type="string"/>
    </sequence>
  </complexType>
</element>

• Open the BPEL process and add an invoke to the adapter (create input and output variables)
• Add a transformation which will be used to properly set the invoke input variable for the RightNow adapter.
• Edit the transformation as shown below. The left variable is the composite input variable, while the right one is the RightNow input one.

• So the BPEL process will look like the one below
• Deploy it to the integrated SOA server
• Last step: configure the username and password in the EM console using the CSF key MyRightNowUser configured in the JCA adapter, under WeblogicDomain => Security => Credentials in the credential map SOA, as shown below



The SOA composite can now be tested, and hopefully an account will be created in Oracle RightNow SC!
Let me know of any issues (see Possible problems below)!

Tip: instead of configuring the credentials in the SOA EM, the csfkey property parameter in the jca-properties definition can be omitted and replaced with username and password parameters.

            Possible problems

            • In case the WebService invocation fails with this:
            "Exception occurred during invocation of JCA binding: "JCA Binding execute of Reference operation 'Create' failed due to: Unable to create Cloud Operation:
            The invoked JCA adapter raised a resource exception.
            Please examine the above error message carefully to determine a resolution "

            Then replace in the Rightnow JCA the targetWSDLURL local WSDL with the remote one, as explained here and redeploy it!


            • In case the WebService invocation fails with this:
            "Exception occurred during invocation of JCA binding: "JCA Binding execute of Reference operation 'Create' failed due to: Client received SOAP Fault from server : Username is not specified in UsernameToken."

            Then the credential key is not configured correctly in the credential store or the process can't see it. Review it!


            • In case the WebService invocation fails with this:
            "Exception occurred during invocation of JCA binding: "JCA Binding execute of Reference operation 'Create' failed due to: Client received SOAP Fault from server : Access Denied."

            Then the user-password are not correct in the credential key. Review it!


            Please find the RightNow Cloud Service Adapter documentation here!
The source code of the composite created is here.

Wednesday, 22 April 2015

Endeca Guided Search Installation and Deployments

Endeca Guided Search - Pre-requisites

In order to set up Oracle Endeca Commerce on RHEL the following requirements are needed:
• An oracle user
• A folder, owned by the oracle user, for the product binaries, possibly in the file-system root (in this how-to we are assuming that the main installation folder is set to /u01/oracle).
• A static IP address and the hostname mapped in the /etc/hosts file.
It is also necessary to download the installation packages from the Oracle eDelivery cloud for the Linux operating system. The packages to download are:
• Oracle Endeca MDEX engine.
• Oracle Endeca PlatformServices.
• Oracle Endeca Content Acquisition System.
• Oracle Endeca ToolsAndFrameworks.
Here is the OS requirement:
• The dos2unix package has to be installed; if not: yum install dos2unix

Endeca Guided Search - Server Installation

Step 1 - Install the Oracle Endeca MDEX engine

Unzip the Oracle Endeca MDEX package and run the following commands (replace the ? with the version of the file in use):
chmod +x mdex?????.sh
./mdex?????.sh --target /u01/oracle
At the end of the process, instead of running the script as suggested, open it and copy and paste the variable declarations into the .bash_profile file, then reload the profile. Run the following command to check that the MDEX root folder has been set in the environment:
echo $ENDECA_MDEX_ROOT
Create an apps folder under the main Endeca installation folder, as sketched below.
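A sketch of that last step, matching the deployment directory used later in this guide:

# the deployment template will later ask for this directory
mkdir -p /u01/oracle/endeca/apps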

Step 2 - Install the Oracle Endeca PlatformServices

Unzip the Oracle Endeca PlatformServices package and run the following commands (replace the ? with the version of the file in use):
chmod +x platformservices?????.sh
./platformservices?????.sh --target /u01/oracle
During the setup process provide:
• The Endeca HTTP service port, by default set to 8888 (use this port if not already bound by other processes).
• The Endeca HTTP shutdown service port, by default set to 8090 (use this port if not already bound by other processes).
• The Endeca Control System JCD port, by default set to 8088 (use this port if not already bound by other processes).
• Type Y to install the EAC agent.
• The MDEX full path including the version number (i.e. /u01/oracle/endeca/MDEX/6.5.0)
• Type Y to install reference implementations.
At the end of the process, as for the MDEX engine, instead of running the script as suggested, open it and copy and paste the variable declarations into the .bash_profile file, then reload the profile. Run the following command to check that ENDECA_ROOT has been set in the environment:
echo $ENDECA_ROOT

Step 3 - Install the Oracle Endeca Tools And Frameworks

Unzip the Oracle Endeca Tools and Frameworks package and run the runInstaller.sh file located under the install sub-directory in the Disk1 folder. Then follow the steps below:
1. Click Next on the welcome page
2. Accept the licence agreement
3. Specify the inventory directory and credentials and click Next
4. Specify the installation type; for a development machine install the Full version as it contains the reference application, then click Next
5. Specify home details and click Next
6. Provide the administrator password for the Workbench console and click Next
7. Review the installation options and click Install
8. Run the inventory script as root before closing the window
9. Click Exit
At the end of the installation process it is necessary, in order to proceed with the Content Acquisition System setup, to define in the .bash_profile the following two variables (the path could change) and reload the profile:
ENDECA_TOOLS_ROOT=/u01/oracle/endeca/ToolsAndFrameworks/11.0.0
export ENDECA_TOOLS_ROOT
ENDECA_TOOLS_CONF=/u01/oracle/endeca/ToolsAndFrameworks/11.0.0/server/workspace
export ENDECA_TOOLS_CONF

Step 4 - Install the Oracle Endeca Content Acquisition System

Unzip the Oracle Endeca Content Acquisition System package and run the following commands (replace the ? with the version of the file in use):
chmod +x cas?????.sh
./cas?????.sh --target /u01/oracle
During the setup process provide:
• The Endeca CAS service port, by default set to 8500 (use this port if not already bound by other processes).
• The Endeca CAS shutdown service port, by default set to 8506 (use this port if not already bound by other processes).
• ENDECA_TOOLS_ROOT and ENDECA_TOOLS_CONF should already be defined in the system, and the Content Acquisition System will retrieve this information from the user profile, so this step will be skipped by default.
• Provide the hostname of the machine on which CAS is installed.

Step 5 - Validating the Oracle Endeca Commerce setup

In order to validate the setup, start the Endeca services as follows:
• Start PlatformServices using the startup.sh script in /u01/oracle/endeca/PlatformServices/11.0.0/tools/server/bin
• Start the Content Acquisition System using the cas-service.sh script in /u01/oracle/endeca/CAS/11.0.0/bin (better to start this service in the background using the command nohup ./cas-service.sh &)
• Start the Workbench using the startup.sh script in /u01/oracle/endeca/ToolsAndFrameworks/11.0.0/server/bin
• Connect to the Workbench console available at http://<host>:8006 and check that a Data Control section appears in the console page (this confirms that CAS has been successfully installed; also, if this console is accessible then CAS is running on the host).

Step 6 - Try to deploy the reference application

To deploy the reference application, run the deployment template pointing at the discover reference application.
Follow these steps:
• Access the deployment template folder under ToolsAndFrameworks (i.e. /u01/oracle/endeca/ToolsAndFrameworks/11.0.0/deployment_template/bin)
• Run the deployment command ./deploy.sh --app /u01/oracle/endeca/ToolsAndFrameworks/11.0.0/reference/discover-data-catalogintegration/deploy.xml
• Press Return
• Type Y and press Return
• Provide an application name (i.e. test)
• Provide the deployment directory (/u01/oracle/endeca/apps)
• Provide the EAC port (by default 8888)
• Provide the CAS installation folder (i.e. /u01/oracle/endeca/CAS/11.0.0)
• Provide the CAS version (i.e. 11.0.0)
• Provide the hostname where CAS is running
• Provide the CAS port (by default 8500)
• Provide the default language (by default English)
• Provide the Workbench port (by default 8006)
• Provide the port that the catalogue should use for the DGraph process (if not already used accept the default value 15000)
• Provide the port that the catalogue should use for the Authoring DGraph process (if not already used accept the default value 15002)
• Provide the port that the catalogue should use for the Log server process (if not already used accept the default value 15010)
• Press Return
• Press Return
To start the reference application follow these steps:
• Access the control folder located under the main application catalogue folder.
• Run the ./initialize_services.sh script
• Run the script ./load_baseline_test_data.sh
• Run the script ./baseline_update.sh
• Run the script ./promote_content.sh
• Access the reference application (http://<host>:8006/endeca_jspref), provide localhost as host and the DGraph port provided during the application deployment phase (in this case 15000) as port, and check that the data has been successfully loaded.

Optional: Assembler service application Deployment

1. Copy the folder ENDECA_SERVER/ToolsAndFrameworks/3.1.2/reference/discover-service into a folder with a different name (for instance new-discover-service) within the same folder (so you'll have ENDECA_SERVER/ToolsAndFrameworks/3.1.2/reference/new-discover-service).
2. Edit ENDECA_SERVER/ToolsAndFrameworks/3.1.2/reference/new-discover-service/WEB-INF/assembler.properties with the correct port and host of the application previously deployed (Step 6).
3. Copy the file ENDECA_SERVER/ToolsAndFrameworks/3.1.2/server/workspace/conf/Standalone/localhost/discover.xml into a file with a different name (for instance new-discover-service.xml), customize it with the path of the application created at point 1 (ENDECA_SERVER/ToolsAndFrameworks/3.1.2/reference/new-discover-service), and copy the file into the same folder (so you'll have ENDECA_SERVER/ToolsAndFrameworks/3.1.2/server/workspace/conf/Standalone/localhost/new-discover-service.xml).
4. Access the Assembler application just deployed via the REST interface using this URL (update host and port if you need): http://54.228.239.224:8006/new-discover-service/json/services/guidedsearch?Ntt=<SearchKeyword>


Please let me know of any issues!!!

Saturday, 22 November 2014

SOA11 and Coherence integration case study

In one of the projects I've worked on, the customer was facing issues due to the many web service calls triggered by their portal. Basically, the Service Bus was crashing since it was reaching its limit in terms of supported web service transactions.

The initial architecture



The logged-in users in WebCenter Portal were triggering calls through the ADF framework to the OSB, and those calls were validated, enriched and then routed to the target third-party services. All those operations were synchronous.

The analysis

We decided to take a deeper look to understand which services were being called, how many times, with which frequency, etc.
The tool we used here was the Statistics feature in the OSB. We enabled it on all the web services, ran the Portal, and played with the functionalities that were calling the WS under analysis.

After 10 minutes we got the following result, where we noted 2 main things:

1. There were some WS operations called very frequently (one every 20 msecs for each logged-in user), like GetTask and GetCurrentState, while all the others were being called only a few times.
2. The WS calls were getting information only for the specific WC Portal user who was triggering those calls.




The Solution

We spoke with the developers of the third-party WS, proposing that they move to a "bundle" approach. Basically, we asked them to expose two more operations, GetBundleTask and GetBundleState, which respectively were like GetTask and GetCurrentState but with a list of userIds/taskIds in input.
We configured a new Coherence server, with an asynchronous approach and caching of the data in mind. The architecture now looks like this:


Benefits:
• The WebCenter/ADF layer did not need any modification, since we did not touch the OSB WS interfaces.
• The SOA async process was getting data in bundles for all the logged-in users in one call (or very few), with much better results.
• The services exposed on the OSB were getting the data 99% of the time from the local cache, so the network latency was now almost zero and the average response time was 30 times lower.


Here are the new statistics after applying those changes. The number of calls from the OSB to the getTask and getCurrentState external web services dropped by 99%; all the data was coming from the cache.



Here the async BPEL service was facing the same latency calling the bundle services as the single OSB services, but at least this time we were calling them with a list of users.

Here are some sequence diagrams which explain the new sequence of operations. The OSB service now tries to get the data from Coherence; if it is not found, it gets the same from the old external services and "subscribes" the user for updates.

            The SOA Business service was calling the external bundle services based on the "subscribed" users list.