Thursday, October 29, 2015
Larry Ellison on self
“When you write a program for Android, you use the Oracle Java tools for everything, and at the very end you push a button and say, ‘Convert this to Android format.’” - Larry Ellison, 2013, CBS interview.
Oracle 10g on CentOS 6.7 - x86 issues: ntcontab.o, snmccolm.o, ORA-27125
Abstract
It's getting harder and harder to install good old 10g on newer CentOS versions. The last attempt, on 6.5, was only semi-problematic, with a few extra packages missing; the 6.7 deployment was even more challenging.
System specifications
CentOS version (/etc/issue): CentOS release 6.7 (Final)
Oracle version: Version 10.2.0.1.0 Production (10201_database_linux_x86_64.cpio)
Problems and solutions
#1: Error invoking target 'ntcontab.o' of makefile
This error occurred at around 65% of the installation progress. Aborting is not an option - more errors will follow and in the end the whole process fails. Some testing suggested that one process builds the file and the next one instantly deletes it, so I tried copying it back by hand:
# cp /misc/oracle/product/10.2.0/db_1/lib32/ntcontab.o /misc/oracle/product/10.2.0/db_1/lib/
But in the end the real cause seems to have been one of the following missing packages (they were not needed on 6.5); a combined install command follows the list:
# yum install libaio-devel.i686 -y
# yum install zlib-devel.i686 -y
# yum install glibc-devel -y
# yum install glibc-devel.i686 -y
# yum install libaio-devel -y
# yum install ksh -y
# yum install glibc-headers
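Since I never isolated the single missing package, the simplest approach is to pull the whole set in at once - a sketch using exactly the packages listed above:
# yum install -y libaio-devel libaio-devel.i686 zlib-devel.i686 glibc-devel glibc-devel.i686 glibc-headers ksh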
#2: /misc/oracle/database/product/10.2.0/db_1/sysman/lib/snmccolm.o: could not read symbols: File in wrong format
It might be solvable with a few more x86 packages thrown into the pile, but I was not able to find the exact culprit. Many sources online simply say to ignore this and fix it later with the 10.2.0.4 patch. That's what we'll do: ignore it.
#3: ORA-27125: unable to create shared memory segment
The exact reason is unknown; the error was thrown by DBCA while the installation process was still running.
The solution is very simple. First, check the oracle user's group information:
[oracle@storage] $ id oracle
uid=500(oracle) gid=502(oinstall) groups=502(oinstall),501(dba)
[oracle@storage] $ more /proc/sys/vm/hugetlb_shm_group
0
Execute the following command as root; it registers the dba group (gid 501) with the kernel's hugetlb_shm_group setting:
[root@storage] # echo 501 > /proc/sys/vm/hugetlb_shm_group
Continue with DBCA - the current step will still fail, but after retrying DBCA the problem disappeared and the database was created.
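Note that a value written directly to /proc does not survive a reboot. Assuming you want the setting to be permanent, the standard sysctl mechanism can be used - a sketch:
# echo "vm.hugetlb_shm_group = 501" >> /etc/sysctl.conf
# sysctl -p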
#4: Bonus problem, not CentOS 6.7 related: You do not have enough free disk space to create the database
My mistake was having 12TB of storage mounted; it looks like I missed the 10g storage requirement: at least 400MB, but less than 2TB. I found no other solution than to unmount /dev/sdb1, shrink it with gparted to free up ~500GB, format the new partition as ext4 and mount the new device /dev/sdb2 on a separate mount point, /oracle.
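For completeness, the formatting and mounting steps might look like this (a sketch; the device name and mount point are the ones from my setup above, and the fstab entry is my assumption for making the mount permanent):
# mkfs.ext4 /dev/sdb2
# mkdir -p /oracle
# mount /dev/sdb2 /oracle
# echo "/dev/sdb2 /oracle ext4 defaults 0 0" >> /etc/fstab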
Conclusion
Move to 11g or 12c, it's about time.
Monday, October 5, 2015
Server monitoring recipe with SNMP: Observium + Nagios
Objective
Everyone with at least a couple of servers, or even a single one, will want to monitor them eventually. Some time ago I used MRTG for all of that, but as my needs expanded I could do less and less with it, and in the end it simply became too complicated to use. MRTG is powerful, yet vulnerable to simple server restarts - you have to remap your pins.
Recipe
I will jump to it right away: the best option currently is Observium + Nagios. I will say more about the first one in a separate paragraph - it might suffer an early death some day, but currently it is a good tool for the job. I ended up using two tools because Nagios has a very good alerting system but lacks interfaces, and, as you have probably guessed already, Observium has the interfaces but lacks an alerting system.
Observium
The people behind this tool are pretty questionable. Some time ago they ran a fundraising campaign to collect some dough and implement an alerting system. After the funds were raised, they removed the promised functionality from the Community release and made it part of their paid version. Money is money, but hey, the Internet knows everything. Furthermore, I tried to communicate with them on Facebook - all my page messages and comments were removed and all PMs ignored.
Never mind the folks; their tool is good for one thing - drawing nice charts:
Nagios
Where Observium fails, Nagios can help. It is an open source project, no need to say more. It lacks interfaces and historical information (excluding paid plugins and extensions), but it has a powerful alerting system. Just set up a couple of contacts with email addresses and you are done:
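For reference, a minimal Nagios Core contact definition might look like the sketch below; the contact name and email address are invented, and the 24x7 timeperiod and notify-*-by-email commands are the ones shipped with the stock sample configuration:
define contact{
    contact_name                   jdoe
    alias                          John Doe
    email                          jdoe@example.com
    service_notification_period    24x7
    host_notification_period       24x7
    service_notification_options   w,u,c,r
    host_notification_options      d,r
    service_notification_commands  notify-service-by-email
    host_notification_commands     notify-host-by-email
}
define contactgroup{
    contactgroup_name    admins
    alias                Nagios Administrators
    members              jdoe
}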
Conclusions
Nagios allows you to receive an email in the middle of a forest when your backup drive hits its warning limit, while Observium helps you analyze and plan your infrastructure and workloads and keeps a record of historical events. I can now see that the admin still hasn't added memory to our webserver, and I asked for that a week ago.
Saturday, August 29, 2015
Gmail attachment cleanup. Cleanup large gmail letters
Even though Gmail gives you quite a lot of storage, over the years I have developed a habit of cleaning up the trash. It just feels right.
Type in "size:20000000" for filtering mails with attachments >20MB. Seach query is in bytes, filter by any size limit you want. "size:5000000" for emails with larger then 5MB attachments.
Once you find the monstrous mails - you can either delete it all or just one of the messages. If you open the conversation - large messages are always expanded. This is just great if you have some development material which is out of date, but you would still like to save the conversation for later. Just select "Delete this message" from message tools.
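Gmail also accepts shorthand size operators that can be combined with other filters - a few examples (to the best of my knowledge; double-check them in Gmail's search help):
larger:20M
larger:5M older_than:1y has:attachment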
Type in "size:20000000" for filtering mails with attachments >20MB. Seach query is in bytes, filter by any size limit you want. "size:5000000" for emails with larger then 5MB attachments.
Once you find the monstrous mails - you can either delete it all or just one of the messages. If you open the conversation - large messages are always expanded. This is just great if you have some development material which is out of date, but you would still like to save the conversation for later. Just select "Delete this message" from message tools.
Friday, July 10, 2015
Cuba Libre
Here is another break from all the typing.
Ingredients:
* 2 oz (50ml) white rum
* 1 lime
* Cola
* Ice
Place the ice into a highball glass. Pour over the rum. Some more rum. Cut the lime in quarters. Squeeze one lime in (optional). Drop all the limes in. Top it up with Cola.
Labels:
Cocktail
More Java grants on Oracle. ORA-29532 java.io.FilePermission
Abstract
I have a simple file system writer/reader: it starts from an Oracle Directory alias and keeps generating subfolders from an organization number and some bits of the date. The alias part is static; the rest is supposed to be generated indefinitely. Leaving mount, ownership and permission details aside, basically the main folder and its subfolders have to be fully available to the user running Oracle.
Short spec
Oracle Directory: /attachments/ (alias ATTACHMENTS)
Organization id: 301
Today's date monthly token: 0715
Schema in use: AWS
Error
Let's start with the stack trace:
<...>
java.security.AccessControlException: the Permission (java.io.FilePermission /attachments/301/0715/19871_head.txt write) has not been granted to AWS. The PL/SQL to grant this is dbms_java.grant_permission( 'AWS', 'SYS:java.io.FilePermission', '/attachments/301/0715/19871_head.txt', 'write' )
<...>
oracle.jdbc.driver.OracleSQLException: ORA-29532: Java call terminated by uncaught Java exception: java.security.AccessControlException: the Permission (java.io.FilePermission /attachments/301/0715/19871_head.txt write) has not been granted to AWS. The PL/SQL to grant this is dbms_java.grant_permission( 'AWS', 'SYS:java.io.FilePermission', '/attachments/301/0715/19871_head.txt', 'write' )
<...>
Possible fixes
The thing is, you need write permissions in your Oracle directory, but in this case the directory tree is recursive and never-ending. I started with this:
BEGIN
dbms_java.grant_permission( 'AWS', 'SYS:java.io.FilePermission', '/attachments/*', 'write' );
END;
The small bit that made me spend a couple of hours was the recursive Java grant: just use a dash "-" instead of "*" and the grant will cover all your subdirectories as well:
BEGIN
dbms_java.grant_permission( 'AWS', 'SYS:java.io.FilePermission', '/attachments/-', 'write' );
END;
Just in case you need more than write, use the full fleet of file permission actions:
BEGIN
dbms_java.grant_permission( 'AWS', 'SYS:java.io.FilePermission', '/attachments/-', 'read,write,delete' );
END;
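To confirm what actually ended up granted, the Java policy view can be queried - a sketch, with the grantee adjusted to your own schema:
SELECT kind, grantee, name, action, enabled
  FROM dba_java_policy
 WHERE grantee = 'AWS'
   AND type_name = 'java.io.FilePermission';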
Thursday, July 2, 2015
Leap second bug 2015. Linux/CentOS, 100% CPU: Java, Oracle, OPMN, Tomcat
Oh dear, it looks like some services are having serious issues with the leap second added last night. Read more about the 2015 leap second on Wikipedia. The fix is simple:
# service ntpd stop; date -s "`date`";service ntpd start;
or
# /etc/init.d/ntpd stop; date -s "`date`"; /etc/init.d/ntpd start;
The problem occurred on an older webserver running Java/Oracle. All CPUs went to 100%. Every service that had anything to do with the JVM went bonkers: Tomcat, OPMN, Oracle, WebCache.
At first I disabled the failing services that were not so important, but then all the others jumped to 100% CPU. It took a few minutes before the situation became clear: all the stuck services had one thing in common - Java. Once they went down, the CPU went idle. Those who were prepared for this day did their preparation three years ago. Happy restarting to all the lazy admins.
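If you are hunting for the same symptom, a quick way to see which processes are burning CPU is something like this (standard procps tools, nothing leap-second specific):
# ps -eo pid,pcpu,comm --sort=-pcpu | head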
Labels:
Java,
leap second,
OPMN,
Oracle,
Tomcat