Hi All,
I am writing this blog to share how we:
- Upgraded Alfresco from ACS 5.2 to ACS 7.1 (using an instance-cloning technique)
- Pointed ACS 5.x and ACS 7.1 to the same S3 bucket (but different RDS instances)
- Performed delta indexing, i.e. carried out offline indexing in the background and, on the day of cut-over, indexed only the pending transactions
- Set up Solr sharding with Alfresco ACS 7.1 and Alfresco Search Services 2.0.3 (2.0.2 has bugs that prevent failed transactions or nodes from being reindexed)
- Performed a full contentless reindex of the Alfresco repository
First of all, it is very important to have a deployment architecture of the existing nodes/instances in the environment. Based on that, you can determine the number of nodes and the infra/hardware needed for the new instances (where the upgrade will be performed).
Create a set of template instances where the following activity is already done:
- Fresh/vanilla setup of ACS pointing to a fresh RDS and a fresh S3 bucket.
- Custom project code deployed into it, started, and tested successfully.
Before the AWS team brings down the RDS for the snapshot, the dev team should collect the following details and statistics:
- Size of each Solr shard and its optimization status
- Solr summary and reports for each Solr shard
- Reports based on facet type from Solr
- Missing nodes and ACLs on each shard, identified by executing a custom program on each shard (this requires support from the production support team for execution on prod).
| Instance Type | Instance Count | Sample Infra | Clone from (source instance) |
|---|---|---|---|
| ActiveMQ | 1 | XX CPU cores, XX GB RAM, XX GB disk space | IP of template AMQ |
| Tracker | 1/2/4 (as per requirement) | | IP of template tracker |
| Shards | 2/8/12 | | IP of template Solr |
| RDS (the instance that will be created from the RDS snapshot taken by the AWS team) | 1 | Same as the existing RDS of the environment | |
| S3 | - | - | S3 will remain the same as the existing environment; no new S3 bucket is created |
| Transformation Node | 1 | | IP of template TNF node |
| ARender | 1 | | IP of template ARender node |
| ACS/Repo | 1/2/3/5 (as per requirement) | | IP of template tracker |
| Jasper | 1/2 | | IP of template Jasper |
- Ask the AWS team to open ports for inter-server communication:

| Source | Target | Ports |
|---|---|---|
| All tracker nodes | Transformation node | 61616, 8090, 8099, 8161, 8095, 8100 |
| All tracker nodes | AMQ node | 61616 |
| All tracker nodes | Solr | 8983 |
| All tracker nodes | RDS | 1521 |
| All tracker nodes | ARender | 8761, 8080 |
| All tracker nodes | S3 | S3 bucket |
| Transformation node | All tracker nodes | 8095, 8080, 8100, 61616 |
| Transformation node | AMQ node | 61616 |
| Solr | All tracker nodes | 8080 |
| Solr | RDS | 1521 |
| ARender | All tracker nodes | 8761, 8080 |
| All Solr shards | All Solr shards | 8983 |
| All repo nodes | Solr LB | 80 |
| All tracker nodes | Solr LB | 80 |
| All repo nodes | All repo nodes | 5701 |
| All tracker nodes | All tracker nodes | 5701 |
| All tracker nodes | All repo nodes | 5701 |
| All repo nodes | All tracker nodes | 5701 |
| All scheduler nodes, integ node | All repo nodes, all tracker nodes | 5701 |
| All repo nodes | All scheduler nodes, integ node | 5701 |
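Once the rules are in place, they can be spot-checked from each source node with a small script. A minimal sketch, assuming `timeout` and bash are available; the host/port pairs below are placeholders to be replaced with the real targets from the table:

```shell
#!/bin/bash
# check_port HOST PORT -> succeeds if a TCP connection opens within 3 seconds
check_port() {
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder pairs -- substitute the real target IPs/ports from the table
while read -r host port; do
  if check_port "$host" "$port"; then
    echo "OPEN     $host:$port"
  else
    echo "BLOCKED  $host:$port"
  fi
done <<'EOF'
127.0.0.1 61616
127.0.0.1 8983
127.0.0.1 1521
EOF
```

Unlike a bare telnet, this reports every pair in one pass, which is handy when rechecking after the AWS team adjusts the rules.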
- Check/verify the infra received on each instance: disk space, CPU cores, and memory.
  - Disk space → df -h
  - Memory → free -g (to get memory in GB)
  - CPU → top or lscpu
- Check with the telnet or nmap command that each port is open and not filtered, as per the table at the beginning of this document.
- Please note that before we stop the old ACS 5.2 environment, we need to gather data from it so that it can be compared against the new ACS 7.1 environment when reindexing is completed.
- To gather the data, follow these steps:
  - Log in to the Solr admin console of every shard.
  - From each shard, gather information such as numDocs, maxDocs, index size (in GB), lastIndexedTx, and numFound.
  - A table like the one below will be derived:
Old ACS 5.2 (sample data)

| IP Address | Shard Number | Num Docs | Max Docs | Size (GB) | lastIndexedTx | numFound |
|---|---|---|---|---|---|---|
| Shard 1 IP | 0 | xxxx | xxxx | 80 | xxxx | xxx |
| Shard 2 IP | 1 | xxxx | xxxx | 95 | xxxx | xxx |
| Shard 3 IP | 2 | xxxx | xxxx | 90 | xxxx | xxx |

Total size → xxx; total numFound → xxx
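Instead of reading each admin console by hand, the same numbers can be pulled from the Search Services SUMMARY report on every shard. A sketch that builds the calls to run (the shard IPs are placeholders; `action=SUMMARY` is the standard Alfresco Search Services admin report, and its JSON contains the fields tabulated above):

```shell
#!/bin/sh
# Emit one SUMMARY URL per shard host passed as an argument. Each response
# contains numDocs, maxDocs, index size and lastIndexedTx for that node's cores.
summary_urls() {
  for host in "$@"; do
    echo "http://$host:8983/solr/admin/cores?action=SUMMARY&wt=json"
  done
}

# Shard IPs are placeholders -- substitute the real ones, then e.g.:
#   for u in $(summary_urls 172.xx.5.11 172.xx.5.12); do curl -s "$u"; done
summary_urls 172.xx.5.11 172.xx.5.12 172.xx.5.13
```

Saving the responses to files also gives you a snapshot to diff against after the ACS 7.1 reindex completes.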
| Task | Action to be taken by | Comments |
|---|---|---|
| Stop the old ACS 5.2 prod environment | RMO | Stop order/sequence: Solr → Trackers → ACS |
| Take the RDS snapshot (an S3 backup was taken in perf for safety, but if live-sync is enabled it is not required; so in prod, if live sync is working fine and restorable at any point in time, no S3 backup is needed) | AWS team | Recommended to do this immediately after the above task of stopping ACS 5.2. Duration: RDS – 2-3 hours (for snapshot creation as well as other configs and port opening); S3 – AWS team can confirm |
| Once the RDS snapshot is taken, create an RDS instance from the snapshot | AWS team | |
| Provide the details of the RDS to the dev/upgrade team | AWS team | |
| Start the configurations | Dev/Upgrade team | |
- The dev (upgrade) team should connect to this RDS host and SID from the local Oracle SQL Developer tool and verify the connection.
- The Alfresco OOTB tables should be visible with data when logged in with the ALFRESCO_OWNER credentials.
- The project custom tables should be visible with data when logged in with the PROJECT_OWNER credentials.
AMQ SETUP – Initial (when offline indexing is to be started for the first time)
- Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
- Go to /software/ActiveMQ/apache-activemq-5.16.0 and delete the 'data' or 'data-OLD' folders from here. You can rename the data folder to something like data-BAK if needed, but make sure a 'data' folder does not exist.
- Create a new 'data' folder with a 'kahadb' folder inside it. Alternatively, delete all contents inside the data and kahadb folders; just keep these two folders – data and kahadb.
- Go to /software/ActiveMQ/apache-activemq-5.16.0/conf and edit jetty.xml.
- Change the bean id="jettyPort" as follows:
    <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
        <!-- the default port number for the web console -->
        <!--<property name="host" value="127.0.0.1"/>-->
        <property name="host" value="172.xx.1.xx3"/>
        <property name="port" value="8161"/>
    </bean>
- Save the jetty.xml file.
- Go to /software/ActiveMQ/apache-activemq-5.16.0/bin/linux-x86-64.
- Ensure that you are logged in as amqadmin.
- Start ActiveMQ with the following command: ./activemq start
- Check that the process is running with the grep command. Also check the ActiveMQ logs (activemq.log under /software/ActiveMQ/apache-activemq-5.16.0/data).
- Keep the AMQ service up and running.
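The data-folder reset above is easy to get wrong by hand (a leftover 'data' folder silently reuses the old store), so it can be wrapped in a helper. A minimal sketch; the ActiveMQ home is passed as a parameter:

```shell
#!/bin/sh
# Reset the ActiveMQ persistence store before the first start: back up any
# existing 'data' folder, then recreate empty data/kahadb, as described above.
amq_reset_data() (
  cd "$1" || exit 1
  if [ -d data ]; then
    mv data "data-BAK-$(date +%Y%m%d%H%M%S)"
  fi
  mkdir -p data/kahadb
)

# Example (path from this document):
#   amq_reset_data /software/ActiveMQ/apache-activemq-5.16.0
```

The timestamped backup name avoids clobbering an earlier data-BAK if the reset is run more than once.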
AMQ SETUP – Later
- Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
- Go to /software/ActiveMQ/apache-activemq-5.16.0 and delete the 'data' or 'data-OLD' folders from here. You can rename the data folder to something like data-BAK if needed, but make sure a 'data' folder does not exist here before proceeding with the next step.
- When the old ACS 5.2 prod environment is in a stopped state, we need to extract the AMQ data from the old ACS 5.2 environment.
- Log in to old Jasper node 1 of ACS 5.2 (172.xx.12.xx), as this is the instance where AMQ was running.
  - Go to /opt/apache-activemq-5.15.4.
  - Ensure that AMQ is not running here.
  - Zip the data folder using the zip command (zip -r data.zip data).
  - Once data.zip is created, transfer this zip to the new (dedicated) AMQ node of the new prod – either through the rsync command (if enabled), or ask the AWS team to transfer this zip from the old to the new AMQ server (under the path /software/ActiveMQ/apache-activemq-5.16.0) using their temp S3 bucket.
- Log in to the AMQ node with your emp id and switch to amqadmin (su amqadmin).
- Go to /software/ActiveMQ/apache-activemq-5.16.0.
- Verify that the data.zip copied by the AWS team exists here.
- Verify the disk space available on this instance. Sufficient space should be available for data.zip to inflate.
- Unzip the data.zip file (unzip data.zip).
- A 'data' folder will be created at /software/ActiveMQ/apache-activemq-5.16.0.
- Verify the size of the data folder (with the du -sh command) and compare it with the size of the old one (that of the old prod Jasper node).
- Go to /software/ActiveMQ/apache-activemq-5.16.0/conf and edit jetty.xml.
- Change the bean id="jettyPort" as follows:
    <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
        <!-- the default port number for the web console -->
        <!--<property name="host" value="127.0.0.1"/>-->
        <property name="host" value="172.xx.1.xxx"/>
        <property name="port" value="8161"/>
    </bean>
- Save the jetty.xml file.
- Go to /software/ActiveMQ/apache-activemq-5.16.0/bin/linux-x86-64.
- Ensure that you are logged in as amqadmin.
- Start ActiveMQ with the following command: ./activemq start
- Check that the process is running with the grep command. Also check the ActiveMQ logs (activemq.log under /software/ActiveMQ/apache-activemq-5.16.0/data).
- Keep the AMQ service up and running.
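The old-versus-new size comparison of the data folder can be scripted rather than eyeballed from du -sh. A small sketch; comparing in kilobytes avoids the rounding that human-readable units introduce:

```shell
#!/bin/sh
# dir_kb DIR -> prints the size of DIR in kilobytes, for comparing the
# restored /software/ActiveMQ/apache-activemq-5.16.0/data against the
# figure noted on the old prod Jasper node before the transfer.
dir_kb() {
  du -sk "$1" | awk '{print $1}'
}

# Example (OLD_DATA_KB recorded on the old node beforehand):
#   [ "$(dir_kb data)" -ge "${OLD_DATA_KB:?}" ] || echo "size mismatch"
```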
Transformation Node SETUP
- Log in with your emp id and su alfadmin.
NOTE: Use OpenJDK 11.0.4 instead of OpenJDK 11.0.2 to avoid Alfresco server crashes in higher environments with high concurrency and high volumes of data (the paths below still show jdk-11.0.2; adjust them to match the JDK you install).
- Check the Java installation with the java -version command. If Java is not installed, follow these steps:
  - Go to /etc/profile.d
  - vi java_home.sh
  - Check that the following entries are present:
    export JAVA_HOME=/software/java/jdk-11.0.2
    export PATH=$PATH:$JAVA_HOME/bin
  - Run the command: vi ~/.bash_profile
  - Check that the following entries are present:
    export JAVA_HOME='/software/java/jdk-11.0.2'
    export PATH=$PATH:$JAVA_HOME/bin
  - Run the command: source ~/.bash_profile
  - Run java -version to verify that Java is installed correctly.
  - Output as follows:
    openjdk version "11.0.2" 2019-01-15
    OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
    OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
- We need to ensure that LibreOffice and ImageMagick are installed on the transformation node, as these are required by the ATS (Alfresco Transformation Services) jars to run.
- If they are not installed, follow the steps below.
IMAGEMAGICK SETUP (if not already installed)
- Install the libs rpm – sudo dnf install ImageMagick-libs-7.0.11-13.x86_64.rpm at /software
- Also run sudo dnf install ImageMagick-7.0.11-13.x86_64.rpm in /software
- If some libs are missing in the above two steps, install those libs first and then install the rpms again.
- Make sure the two rpms are installed successfully with the command "dnf list installed | grep ImageMagick" – it should give two results.
- Put ImageMagick on the PATH as follows:
  - Go to /etc/profile.d
  - vi java_home.sh
  - Make sure you have the following entries:
    export IMAGEMAGICK=/bin
    export PATH=$PATH:$IMAGEMAGICK
- Verify the version with the command: magick -version
- Output as follows:
  Version: ImageMagick 7.0.11-13 Q16 x86_64 2021-05-17 https://imagemagick.org
  Copyright: (C) 1999-2021 ImageMagick Studio LLC
  License: https://imagemagick.org/script/license.php
  Features: Cipher DPC HDRI Modules OpenMP(4.5)
  Delegates (built-in): bzlib cairo djvu fontconfig freetype gslib jng jp2 jpeg lcms ltdl lzma openexr pangocairo png ps raqm raw rsvg tiff webp wmf x xml zlib
LIBREOFFICE SETUP (if not already installed)
- Extract the tar file – tar -xvf LibreOffice_6.3.5.1_Linux_x86-64_rpm.tar.gz in /software
- Once extracted, install the rpm files – sudo rpm -ivh *.rpm in /software/LibreOffice_6.3.5.1_Linux_x86-64_rpm/RPMS
- Put LibreOffice on the PATH as follows:
  - Go to /etc/profile.d
  - vi java_home.sh
  - Make sure you have the following entries:
    export LIBREOFFICE=/opt/libreoffice6.3/program
    export PATH=$PATH:$LIBREOFFICE
- Verify by checking the version with the following command: "libreoffice6.3 --version"
- Output as follows:
  LibreOffice 6.3.5.1 9a62adaf9abe90e8fef419f29114b0176dd66801
Once LibreOffice and ImageMagick are installed on the transformation node, continue with the steps below:
- Log in to the transformation node with your emp id and switch to alfadmin (su alfadmin).
- Go to /software/alfresco-transform-service.
- Give execution rights to the ats.sh file if it does not have them.
- Run ats.sh (./ats.sh start)
- Check the logs (tail -f nohup*) and see if any errors are found.
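Once ats.sh reports a clean start, the engine can also be probed over HTTP rather than only by tailing nohup. A sketch, assuming the transform engine listens on port 8090 (as in the port table earlier) and exposes the standard /live probe of Alfresco transform engines – verify both against your actual deployment:

```shell
#!/bin/sh
# ats_alive HOST [PORT] -> succeeds if the transform engine answers /live
ats_alive() {
  curl -fs --max-time 5 "http://$1:${2:-8090}/live" >/dev/null
}

# Example: ats_alive 172.xx.2.10 && echo "ATS is up"
```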
TRACKER SETUP
- Check the Java installation with the java -version command. If Java is not installed, follow the Java setup steps from the previous section.
- ImageMagick and LibreOffice are not needed on tracker nodes.
- Clear all old logs, as the instance was cloned from the template node.
- Update the server.xml file under /software/alfresco/alfresco-content-services/tomcat/conf with this connector:
    <Connector port="8080" URIEncoding="UTF-8" protocol="HTTP/1.1"
        compressibleMimeType="text/html,text/xml,text/css,text/javascript,application/x-javascript,application/javascript"
        compression="on" compressionMinSize="128"
        noCompressionUserAgents="gozilla, traviata"
        connectionTimeout="300000" keepAliveTimeout="300000"
        redirectPort="8443" maxHttpHeaderSize="32768" maxThreads="2000" maxConnections="2000"/>
- Update the catalina.sh file under /software/alfresco/alfresco-content-services/tomcat/bin and set this property as per the infra sizing. Currently set as follows on all 4 tracker nodes in the prod environment:
    JAVA_OPTS="$JAVA_OPTS -Xms128g -Xmx350g -XX:+UseCodeCacheFlushing -XX:NewRatio=3 -XX:SurvivorRatio=9 -XX:TargetSurvivorRatio=80 -XX:MaxTenuringThreshold=13 -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=80 -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:ParallelGCThreads=20 -XX:+CMSClassUnloadingEnabled -Dsun.security.ssl.allowUnsafeRenegotiation=true -Djava.awt.headless=true -Dalfresco.home=/software/alfresco/alfresco-content-services -Dcom.sun.management.jmxremote -XX:ReservedCodeCacheSize=256m"
- Update alfresco-global.properties to point to the RDS instance created from the snapshot (the one received from the AWS team). Ensure the following points are met:
  - Turn off audit.
  - Turn off the activities feed.
  - Turn off all schedulers.
  - Compare with the alfresco-global.properties of the existing prod tracker and add any mandatory property that is needed (but note that any new property you add should be compatible with ACS 7.1).
  - Update alfresco.host and share.host.
  - Keep the db.schema.update=true property in the alfresco-global.properties file (just on the 1st tracker) without fail for the first-time schema upgrade.
  - NOTE: The maxPermissionsCheck value in alfresco-global.properties can be increased if the default number (of permission checks to be performed on a node) is not enough for the ACL checks performed by Solr during indexing.
  - Keep db.password unencrypted for now (ALFRESCO_USER). Later on, the encrypted password can be put in place.
NOTE: Please log in to the prod instance(s) to check the contents of the alfresco-global.properties and share-config-custom.xml files.
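As a sketch, the "turn off" items above typically map to properties like the following. Treat the exact list as an assumption: verify each flag against your existing prod file, and note that the schedulers to disable are mostly project-specific cron properties not shown here:

```properties
# Disable auditing while reindexing runs
audit.enabled=false
# Disable the activities feed jobs
activities.feed.notifier.enabled=false
activities.feed.cleaner.enabled=false
# First tracker only: allow the one-time schema upgrade
db.schema.update=true
```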
SHARD SETUP
Assuming that the vanilla search-services zip file has been unzipped, the folder structure of search-services should be present inside /software/alfresco/alfresco-search-services.
- Make the following changes in /software/alfresco/alfresco-search-services/solrhome/templates/rerank/conf/solrcore.properties on each Solr node:
  - alfresco.host=<TRACKER_IP_OR_REPO_IP_AS_PER_ARCHITECTURE>
  - alfresco.port=8080
  - alfresco.baseUrl=/alfresco
  - alfresco.secureComms=none
  - alfresco.socketTimeout=3600000 (increased based on high index and ACL size)
  # Properties set to increase indexing performance
  - merge.policy.maxMergedSegmentMB=10240
  - merge.policy.maxMergeAtOnce=5
  - merge.policy.segmentsPerTier=5
  - merger.maxMergeCount=16
  - merger.maxThreadCount=8
  # Disable content indexing (as per requirement)
  - alfresco.index.transformContent=false
  - alfresco.ignore.datatype.1=d:content
  # Increase the maxBooleanClauses limit (e.g. to 100000) if the number of ACL transactions and the ACE counts inside each ACL transaction are huge
  - solr.maxBooleanClauses=60000
Update the deletion policy in solrconfig.xml:
cd /software/alfresco/alfresco-search-services/solrhome/templates/rerank/conf
vi solrconfig.xml
<!-- Enable deletion policy to delete tlog files created during indexing. -->
<deletionPolicy class="solr.SolrDeletionPolicy">
    <!-- The number of commit points to be kept -->
    <str name="maxCommitsToKeep">1</str>
    <!-- The number of optimized commit points to be kept -->
    <str name="maxOptimizedCommitsToKeep">0</str>
    <!--
        Delete all commit points once they have reached the given age.
        Supports DateMathParser syntax, e.g. 30MINUTES or 1DAY.
    -->
    <str name="maxCommitAge">30MINUTES</str>
</deletionPolicy>
- Start each Solr node – go to /software/alfresco/alfresco-search-services/solr/bin and run ./solr start
- After starting each Solr node (vanilla search-services), run the core-creation URL for each shard from the browser, one call per shard, incrementing the shardIds parameter each time (starting from shardIds=0).
- Continue up to the last shard, i.e. shardIds=11.
- Running the above URLs creates the core and the shard structure/taxonomy.
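The exact URLs used in our environment are not reproduced here, but their general shape follows the Alfresco Search Services dynamic-sharding documentation. A sketch that emits one call per shard for a 12-shard layout; the host names are placeholders, and the parameter set (newCore, storeRef, numShards, numNodes, nodeInstance, template, shardIds) should be verified against the docs for your version:

```shell
#!/bin/sh
# Emit one newCore call per shard (shardIds 0..11). Host names are
# placeholders; one core is created per shard node.
newcore_urls() {
  n=0
  while [ "$n" -le 11 ]; do
    echo "http://SHARD_${n}_HOST:8983/solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=12&numNodes=12&nodeInstance=$((n + 1))&template=rerank&shardIds=$n"
    n=$((n + 1))
  done
}

newcore_urls
```

Generating the list up front also makes it easy to double-check that no shardIds value is skipped or duplicated before firing the calls.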
- Stop all Solr nodes.
- Edit solr.in.sh and set the following properties shard-wise:
  - SOLR_JAVA_MEM="-XmsAAg -XmxBBg" (can be 248G, 378G, 750G, etc., depending on the requirement)
  - SOLR_SOLR_HOST=<Shard-Host-IP>
  - SOLR_ALFRESCO_HOST=<Tracker-Host-IP>
  - SOLR_ALFRESCO_PORT=8080
- The values for solr.in.sh (located in /software/alfresco/alfresco-search-services/) for each shard are as follows –
1. For Tracker 1/Repo 1 (based on your configuration):
   - Shard 1 - SOLR01 - SOLR_IP
       SOLR_JAVA_MEM="-XmsAAg -XmxBBg"
       SOLR_SOLR_HOST=
       SOLR_ALFRESCO_HOST=
       SOLR_ALFRESCO_PORT=8080
   - Shard 2 - SOLR02 - SOLR_IP (same four properties, with this shard's own host IP)
   - Shard 3 - SOLR03 (same four properties, with this shard's own host IP)
2. For Tracker 2/Repo 2 (based on your configuration):
   - Shard 4 - SOLR04, Shard 5 - SOLR05, Shard 6 - SOLR06 – the same four properties per shard, each with its own host IP and its tracker's IP.
3. Same for trackers 3 and 4.
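Since the same four values recur on every shard with only the IPs and heap changing, the fragment can be rendered from a template instead of edited twelve times by hand. A minimal sketch; IPs and heap size are placeholders:

```shell
#!/bin/sh
# render_solr_in SHARD_IP TRACKER_IP HEAP -> prints the four solr.in.sh
# values for one shard, per the shard-wise settings listed above.
render_solr_in() {
  cat <<EOF
SOLR_JAVA_MEM="-Xms$3 -Xmx$3"
SOLR_SOLR_HOST=$1
SOLR_ALFRESCO_HOST=$2
SOLR_ALFRESCO_PORT=8080
EOF
}

# Example: render_solr_in 172.xx.5.11 172.xx.4.11 248g >> solr.in.sh
```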
- Edit shared.properties in /software/alfresco/alfresco-search-services/solrhome/conf/ and uncomment the following properties, if not already done, on all shards. NOTE: without uncommenting the three cross-locale properties below, exact-term queries (like =) do not work.
  - solr.host=<SHARD-<N>-IP_ADDRESS>
  - alfresco.cross.locale.datatype.0={http://www.alfresco.org/model/dictionary/1.0}text
  - alfresco.cross.locale.datatype.1={http://www.alfresco.org/model/dictionary/1.0}content
  - alfresco.cross.locale.datatype.2={http://www.alfresco.org/model/dictionary/1.0}mltext
- Verify solrcore.properties in /software/alfresco/alfresco-search-services/solrhome/rerank--alfresco--shards--12-x-1--node--1-of-1/alfresco-n/conf on each shard. The tracker IP address should be correctly configured:
  - data.dir.root=/software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore
  - alfresco.host=TRACKER_IP
  - shard.count=12
  - shard.instance=0
  - data.dir.store=alfresco-0
  - alfresco.port=8080
  - alfresco.baseUrl=/alfresco
  - alfresco.fingerprint=false
  - alfresco.socketTimeout=3600000
  - alfresco.secureComms=none
  - alfresco.metadata.ignore.datatype.1=app\:configurations
  - alfresco.metadata.ignore.datatype.0=cm\:person
  - merge.policy.maxMergeAtOnce=5
  - merge.policy.segmentsPerTier=5
  - merge.policy.maxMergedSegmentMB=10240
  - merger.maxMergeCount=16
  - merger.maxThreadCount=8
- Verify solrconfig.xml in /software/alfresco/alfresco-search-services/solrhome/rerank--alfresco--shards--12-x-1--node--1-of-1/alfresco-1/conf. The following entry should be present:
    <!-- Enable deletion policy to delete tlog files created during indexing. -->
    <deletionPolicy class="solr.SolrDeletionPolicy">
        <!-- The number of commit points to be kept -->
        <str name="maxCommitsToKeep">1</str>
        <!-- The number of optimized commit points to be kept -->
        <str name="maxOptimizedCommitsToKeep">0</str>
        <!--
            Delete all commit points once they have reached the given age.
            Supports DateMathParser syntax, e.g. 30MINUTES or 1DAY.
        -->
        <str name="maxCommitAge">30MINUTES</str>
    </deletionPolicy>
- All shard configurations are ready now.
Delete the contents inside /software/alfresco/alfresco-search-services/solrhome/alfrescoModels and also inside /software/alfresco/alfresco-search-services/indexes/workspace-SpacesStore/alfresco-0/index, so that any existing indexes are cleared and we start with fresh indexing on Solr startup.
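That cleanup can be wrapped in a small helper so it is repeatable on every shard. A sketch; the directory layout mirrors the paths in this document, and the Solr home is a parameter:

```shell
#!/bin/sh
# clear_index SOLR_HOME -> removes cached models and any existing index
# data so that tracking starts from scratch on the next Solr start.
clear_index() {
  rm -rf "$1"/solrhome/alfrescoModels/* \
         "$1"/indexes/workspace-SpacesStore/*/index/*
}

# Example: clear_index /software/alfresco/alfresco-search-services
```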
Start 1st Tracker/Repo
- ./alfresco.sh start from /software/alfresco/alfresco-content-services; tail the logs – tail -f catalina.out from /software/alfresco/alfresco-content-services/tomcat/logs
- Note the time taken for the server to start and update the schema (1 or 2 minutes max).
- Verify the config on the /alfresco URL: ACS 7.1 version, audit disabled, etc.
- If the following errors come up while starting Alfresco:
  - "Address bind exception: Port 5701 already in use", OR "Hazelcast cannot start. Port [5701] is already in use and auto-increment is disabled." Then:
    - Stop Alfresco.
    - Stop the ARender service running on the same machine.
    - Start Alfresco.
    - Start the ARender service.
  - ERROR [web.context.ContextLoader] [main] Context initialization failed
    org.alfresco.error.AlfrescoRuntimeException: 01220021 Not all patches could be applied
    ### Error updating database. Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    ### The error may involve alfresco.appliedpatch.update_AppliedPatch-Inline
    ### The error occurred while setting parameters
    ### SQL: update alf_applied_patch set description = ?, fixes_from_schema = ?, fixes_to_schema = ?, target_schema = ?, applied_to_schema = ?, applied_on_date = ?, applied_to_server = ?, was_executed = ?, succeeded = ?, report = ? where id = ?
    ### Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column. Then:
    - Stop Alfresco.
    - Go to the Oracle SQL Developer tool and log in with ALFRESCO_OWNER.
    - Check the ALF_APPLIED_PATCH table for the id found in the above error.
    - If no entry for this id exists, insert a new entry by running the following query:
      insert into ALF_APPLIED_PATCH values(
          'ID',
          'NAME',
          0,
          NUM,
          NUM,
          99999,
          'DATETIME',
          'ALF_VERSION',
          1,
          1,
          'TEXT_WITH_NODE_PATH_AND_NODEREF');
  - While starting Alfresco, the logs might get stuck and not move forward even after waiting 10-15 minutes. Then:
    - Stop Alfresco.
    - Clear the contents of the /temp and /work directories in tomcat.
    - Start Alfresco.
    - Give it some time to start successfully.
- Check the Solr config on the /alfresco/s/enterprise/admin/admin-searchservice page:
  - The "Content Tracking enabled" checkbox should be selected by default.
  - The Solr hostname property should be the correct Solr LB URL, and the Solr port (non-SSL) should be 80. Leave the Solr port (SSL) at 8443.
  - After making the above changes, click the Save button at the bottom of the page.
  - This value will be persisted to the Alfresco DB, so when you start and access other trackers or repo nodes in the future, this same value will be displayed on this page.
  - Perform the above steps on all trackers and repo nodes.
- Check the config on the /alfresco/s/enterprise/admin/admin-flocs page:
  - The "Dynamic shard instance registration" checkbox should be selected by default.
  - 12 shards (for perf, 12 shards) will be displayed below.
  - The "Has content" radio button will be disabled (red), as contentless indexing is done.
  - If you notice double the number of shards (e.g. 24 instead of 12), the other 12 shards will be in a silent state rather than an active state. Click the 'clean' button on this page to remove the silent ones.
  - Perform the above steps on all trackers and repo nodes.
- Stop ACS – ./alfresco.sh stop from /software/alfresco/alfresco-content-services
- So we started tracker 1, allowed it to upgrade the DB schema, and stopped it.
- Now we can apply the same configuration to the other 3 trackers and start them.
Start all Trackers and their respective Shards
- Comment out the db.schema.update=true property in the alfresco-global.properties file if it is not already commented. This property will mostly be present only in tracker 1, but check all trackers. Comment it out in all trackers before proceeding with the next step.
- Start each tracker by going to /software/alfresco/alfresco-content-services/ and running ./alfresco.sh start; tail the logs, and verify from the browser URL (/alfresco) once up.
- For each shard, perform the following steps:
  - Start Solr – ./solr/bin/solr start from /software/alfresco/alfresco-search-services
  - Check the Solr logs. The folder structure of the Solr shard should be created.
  - Verify the details in the Solr admin console from the browser URL.
  - Allow the indexing to start (it will take some time, approx. 30 minutes, for indexing to start and for the txRemaining and numFound counts to change).
  - Monitor the indexing.
  - Check the Solr logs for any errors.
  - Monitor memory usage on the trackers as well as the shards for high utilization, and watch that the disk space does not fill up.
REPO SETUP
- Ask for provisioning of the required number of repo nodes with the ports opened.
- If already provisioned, start the following configurations:
  - Put share.war from the vanilla distribution zip into tomcat/webapps on both repo nodes.
  - Edit/uncomment the share.xml file in /software/alfresco/alfresco-content-services/tomcat/conf/Catalina/localhost.
  - Copy the latest project custom code/jars into the alfresco-content-services/modules/platform and alfresco-content-services/modules/share folders.
  - Check whether the amps are applied in alfresco.war and share.war with these commands (from /alfresco-content-services/bin):
    java -jar alfresco-mmt.jar list ../tomcat/webapps/alfresco.war
    java -jar alfresco-mmt.jar list ../tomcat/webapps/share.war
  - If the saml-repo and javascript-console amps are not displayed in the list, apply those amps as per the following two points.
  - Apply the saml and javascript console amps from the amps folder to alfresco.war:
    java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps/alfresco-saml-repo-1.2.1.amp /software/alfresco/alfresco-content-services/tomcat/webapps/alfresco.war
    java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps/javascript-console-repo-0.7.amp /software/alfresco/alfresco-content-services/tomcat/webapps/alfresco.war
  - Apply the saml and javascript console amps from the amps_share folder to share.war:
    java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps_share/alfresco-saml-share-1.2.1.amp /software/alfresco/alfresco-content-services/tomcat/webapps/share.war
    java -jar alfresco-mmt.jar install /software/alfresco/alfresco-content-services/amps_share/javascript-console-share-0.7.amp /software/alfresco/alfresco-content-services/tomcat/webapps/share.war
  - Update alfresco-global.properties:
    - RDS details
    - URL details, like the Transform Core properties
    - Compare with the existing (old) repo nodes and add any mandatory properties that are needed
  - Update share-config-custom.xml with details like repository-url and the endpoint-url for alfresco, alfresco-api, and alfresco-feed. If you have custom Share jars, place them under modules/share.
  - Start Alfresco – ./alfresco.sh start at /software/alfresco/alfresco-content-services; tail the logs – tail -f catalina.out at /software/alfresco/alfresco-content-services/tomcat/logs
  - Verify from the browser URLs once up.
  - If the following errors come up while starting Alfresco:
    - "Address bind exception: Port 5701 already in use", OR "Hazelcast cannot start. Port [5701] is already in use and auto-increment is disabled." Then:
      - Stop Alfresco.
      - Stop the ARender service running on the same machine.
      - Start Alfresco.
      - Start the ARender service.
    - While starting Alfresco, the logs might get stuck and not move forward even after waiting 10-15 minutes. Then:
      - Stop Alfresco.
      - Clear the contents of the /temp and /work directories in tomcat.
      - Start Alfresco.
      - Give it some time to start successfully.
- ARenderHMI deployment and configuration on all 5 repo nodes:
  - Stop Alfresco – ./alfresco.sh stop at /software/alfresco/alfresco-content-services
  - Copy ARenderHMI.war to the tomcat/webapps folder.
  - Edit the arender.properties file – vi arender.properties in /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/classes. Set the values accordingly and save. The following property might need to be added if you get a wss socket error while loading the ARender preview page:
    arender.web.socket.enabled=false
  - Edit arender-server-custom-alfresco.properties at /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/classes. Set the values accordingly and save:
    arender.server.rendition.hosts=http://IP_ADDRESS:8761/
    arender.server.alfresco.atom.pub.url=http://localhost:8080/alfresco/api/-default-/cmis/versions/1.1/atom
    arender.server.alfresco.soap.ws.url=http://localhost:8080/alfresco/cmisws/cmis?wsdl
    arender.server.url.parsers.beanNames=customCmisUrlParser,DefaultURLParser,DocumentIdURLParser,FileattachmentURLParser,ExternalBeanURLParser,AlterContentParser,FallbackURLParser
    arender.server.alfresco.use.soap.ws=true
    arender.server.alfresco.annotation.path=/Data Dictionary
  - In the same location, edit arender-custom-server-integration.xml, set the values accordingly, and save:
In the same location, edit arender-custom-server-integration.xml, set the values accordingly and save:
§
<?xml version="1.0"
encoding="UTF-8"?>
<beans
default-lazy-init="true" default-autowire="no"
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!--
xml imported by ARender Java Web Server side, please add any customization you
wish to see loaded in this file-->
<bean id="customCmisUrlParser"
class="com.arondor.viewer.cmis.CustomCMISURLParser">
<property
name="cmisConnection" value="cmisConnection"/>
<property name="alfHost"
value="http://localhost:8080"/>
</bean>
<bean id="xfdfAnnotationAccessor" class="com.arondor.viewer.xfdf.annotation.CustomXFDFAnnotationAccessor" scope="prototype">
<property
name="contentAccessor">
<bean
class="com.arondor.viewer.xfdf.annotation.FileSerializedContentAccessor">
<property name="path"
value="annotations/"/>
</bean>
</property>
<property name="alfHost"
value="http://localhost:8080"/>
<property name="annotationCreationPolicy">
<bean
class="com.arondor.viewer.client.api.annotation.AnnotationCreationPolicy">
<property
name="canCreateAnnotations" value="true"/>
<property
name="textAnnotationsSupportHtml" value="true"/>
<property name="textAnnotationsSupportReply"
value="true"/>
<property
name="textAnnotationsSupportStatus" value="true"/>
<property
name="annotationsSupportSecurity" value="false"/>
<property
name="availableSecurityLevels">
<ref bean="availableSecurityLevels"/>
</property>
<property
name="annotationTemplateCatalog">
<ref
bean="annotationTemplateCatalog"/>
</property>
</bean>
</property>
</bean>
<bean id="annotationAccessorFactory" class="com.arondor.viewer.common.annotation.BeanAnnotationAccessorFactory">
<property name="beanName"
value="xfdfAnnotationAccessor"/>
<property name="fallBackBeanNames" ref="fallBackAnnotationAccessorBeanNames" />
</bean>
<bean id="cmisConnection" class="com.arondor.viewer.cmis.CMISConnection" scope="prototype">
<property name="atomPubURL" value="${arender.server.alfresco.atom.pub.url}"/>
<property name="soapWSURL" value="${arender.server.alfresco.soap.ws.url}"/>
<property
name="annotationsPath"
value="${arender.server.alfresco.annotation.path}"/>
<property
name="annotationFolderName"
value="${arender.server.alfresco.annotation.folder.name}"/>
<property name="useSoapWS"
value="${arender.server.alfresco.use.soap.ws}"/>
<property name="user"
value="${arender.server.alfresco.user}"/>
<property name="password"
value="${arender.server.alfresco.password}"/>
</bean>
</beans>
Ensure that the required jars are present at /software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/lib (this is needed if you use arender.server.alfresco.use.soap.ws=true):
- jaxws-api-2.2.11.jar
- javax.jws-3.0.jar
- arondor-arender-for-company-project-4.6.0-beta0.jar
- saaj-api-RELEASE120.jar
- json-20160810.jar
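As a quick sanity check, a shell sketch like the following can confirm those jars are in place before enabling the SOAP web services flag. `check_jars` is just an illustrative helper, and `LIB_DIR` defaults to the path above; override it for your install:

```shell
# Sketch: verify the SOAP-related jars are present in the ARenderHMI lib directory.
# LIB_DIR defaults to the path used in this setup; override it for your layout.
LIB_DIR="${LIB_DIR:-/software/alfresco/alfresco-content-services/tomcat/webapps/ARenderHMI/WEB-INF/lib}"

check_jars() {
    # Print one line per jar: "found" or "missing".
    for jar in "$@"; do
        if [ -f "$LIB_DIR/$jar" ]; then
            echo "found   $jar"
        else
            echo "missing $jar"
        fi
    done
}

check_jars jaxws-api-2.2.11.jar javax.jws-3.0.jar \
    arondor-arender-for-company-project-4.6.0-beta0.jar \
    saaj-api-RELEASE120.jar json-20160810.jar
```

Any "missing" line means the corresponding jar still needs to be copied into WEB-INF/lib before restarting Tomcat.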
- Create a new file "application.properties" at /software/ARender4.7.1/modules/TaskConversion/ with the following content:
# soffice path (used only in LibreOffice context)
rendition.soffice.path=/opt/libreoffice6.3/program/soffice
- Start Alfresco and tail the logs.
- Once Alfresco is up, verify the URLs from a browser.
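The properties-file step above can be sketched as follows. `MODULE_DIR` defaults to a temp directory so the sketch runs anywhere; on the actual ARender node it would be /software/ARender4.7.1/modules/TaskConversion:

```shell
# Sketch: create the TaskConversion application.properties described above.
# MODULE_DIR defaults to a temp dir so this runs anywhere; on the ARender node
# it would be /software/ARender4.7.1/modules/TaskConversion.
MODULE_DIR="${MODULE_DIR:-$(mktemp -d)/modules/TaskConversion}"
mkdir -p "$MODULE_DIR"

cat > "$MODULE_DIR/application.properties" <<'EOF'
# soffice path (used only in LibreOffice context)
rendition.soffice.path=/opt/libreoffice6.3/program/soffice
EOF

echo "wrote $MODULE_DIR/application.properties"
```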
ARENDER SETUP (ARender rendition engine setup) on dedicated ARender node
- Login with the alfadmin user.
- On production, the ARender node was cloned from the Performance ARender node, so skip Part A and jump to Part B. If the node was not cloned and ARender has to be set up from scratch, follow Part A as well as Part B.
- Part A:
- Ensure that LibreOffice is installed on this node. Follow the LibreOffice installation steps mentioned earlier in this document, check that LibreOffice is on the PATH, and confirm that both "libreoffice6.3 --version" and "sudo libreoffice6.3 --version" show the correct output.
- Also ensure that Java is installed and its PATH entry is set correctly: both "java -version" and "sudo java -version" should show the correct output.
- Reach out to the AWS/infra team if either of them is not working.
- Check the environment variables with the following commands:
  - env
  - show-environment
- To set an environment variable:
  - Syntax: set-environment VARIABLE_NAME=VARIABLE_VALUE
  - Example: set-environment PATH=/sbin:/bin:/usr/sbin:/usr/bin:/software/jdk-11.0.2/bin:/opt/libreoffice6.3/program:/bin:/usr/lib:/usr/local/lib
  - Make sure that you copy the existing PATH variable value and then append your additions to it.
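The same append-don't-overwrite rule applies when setting PATH in a shell profile rather than via set-environment. A minimal sketch, with the directory names taken from the example PATH above:

```shell
# Sketch: append the JDK and LibreOffice directories to the existing PATH
# instead of replacing it (directory names taken from the example above).
NEW_DIRS="/software/jdk-11.0.2/bin:/opt/libreoffice6.3/program"

case ":$PATH:" in
    *":$NEW_DIRS:"*) ;;              # already present, do nothing
    *) PATH="$PATH:$NEW_DIRS" ;;     # append, keeping the current value
esac
export PATH
echo "$PATH"
```

The `case` guard keeps the entries from being duplicated if the snippet is sourced more than once.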
- Go to the location where the rendition-engine jar file (rendition-engine-installer-4.7.1-rendition.jar) is present.
- Before starting the installation, create the folder "ARender4.7.1" under /software.
- Start the installation (as alfadmin) with the command:
java -jar rendition-engine-installer-4.7.1-rendition.jar
- When prompted for the path where ARender should be installed, provide: /software/ARender4.7.1
- It will prompt for a username: provide your user as the username.
- Part B:
- Once the installation completes, go to /etc/systemd/system, edit the file ARenderRenditionEngineService.service and change its content as follows:

[Unit]
Description=ARender rendition engine service
After=syslog.target

[Service]
User=alfadmin
ExecStart=/software/ARender4.7.1/service/unix/service-mode-rendition-engine-4.7.1.jar
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
- Verify with "ps aux | grep ARender" whether the ARender service is running. If it is not, go to /etc/systemd/system and run:
  - sudo systemctl start ARenderRenditionEngineService.service
- To stop it: sudo systemctl stop ARenderRenditionEngineService.service
- To check its status: sudo systemctl status ARenderRenditionEngineService.service
- If the ARender service does not start successfully:
  - Check whether LibreOffice is installed correctly on this node.
  - Connect with the AWS/infra team if the service is still not running; a Linux-level (service rights) fix can be applied to make it work.
Sanity Check
- The team should open the browser URLs of the repo nodes as well as the Solr nodes and check that all are accessible.
- Access each repo node's IP-specific URL, log in to Share with admin and a non-admin user, and check that search and other basic functionalities work fine.
- Access the LB URL and check login, search, and the other basic functionalities.
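The URL checks above can be partly automated with a small sketch like this; `check_urls` is just an illustrative helper, and the commented-out hosts are placeholders, not the real environment's URLs:

```shell
# Sketch: report whether each repo/Solr/load-balancer URL responds.
# The URLs passed in are placeholders; substitute your environment's hosts.
check_urls() {
    for url in "$@"; do
        if curl -sSf -o /dev/null --max-time 10 "$url" 2>/dev/null; then
            echo "OK   $url"
        else
            echo "FAIL $url"
        fi
    done
}

# Placeholder hosts - replace with the real repo node, Solr node and LB URLs:
# check_urls "http://repo-node-1:8080/share" "http://solr-node-1:8983/solr" "https://lb.example.com/share"
```

This only confirms the endpoints answer; the login and search checks still have to be done interactively in the browser.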
Post go-live, a few changes might be needed
Day 1
- Tomcat server.xml parameters were added for compression and maxThreads.
- The missed firewall port opening (5701) was implemented.
Day 2
- JVM garbage collection parameters were added.
- ALFRESCO_OWNER schema statistics gathering and index rebuild were performed.
- The Solr formData limit was changed from 2 MB to 2 GB.
- The Solr number of facets was changed to 40 in solrcore.properties.
Day 3
- Ulimit change: number of open files set to unlimited (check with the "ulimit -a" command).
- Alfresco node changes:
  - Custom-tx-cache-context.xml was added to the tomcat/shared/classes/alfresco/extension directory.
  - solr.http.connection.timeout=0
  - search.solrTrackingSupport.ignorePathsForSpecificAspects=true
  - search.solrTrackingSupport.ignorePathsForSpecificTypes=true
- DB changes:
  - OPTIMIZER_INDEX_COST_ADJ=5
  - OPTIMIZER_INDEX_CACHING=50
  - PROJECTNAME_OWNER schema statistics gathering was triggered.