RAC Installation on VMware (Openfiler for SAN)

Installing Oracle RAC
Installing Openfiler


When powering on the nodes:
1) SAN first
2) Then NODE 1
3) Then NODE 2

When powering off the nodes:
1) NODE 2
2) NODE 1
3) SAN

1) Create a new virtual machine
2) Custom (advanced)
3) Installer disc image file (ISO)
4) Browse to the Openfiler ISO
5) Guest operating system: Other
6) Version: Other
7) Virtual machine name: SAN
8) Location: the drive where you want to store the machine (E:); create a new folder san_mach
9) 1 processor
10) 512 MB RAM (1 GB is recommended)
11) Use host-only networking
12) Accept the recommended defaults (Next twice)
13) 10 GB hard disk, store virtual disk as a single file
14) Finish
15) Power on the virtual machine
16) Click inside the console, then press Enter for graphical mode
17) Skip the media test
18) Next
19) U.S. English keyboard
20) Manual partitioning
21) Erase all data: Yes
22) Click the free space, then New, for each partition
23) / (2000)
24) /tmp (2000)
25) /var (2000)
26) /usr (2000)
27) swap 1024 (double the RAM)
28) Leave the rest of the disk unallocated
29) Yes, take memory from swap (if prompted)
30) Select the DHCP entry and click Edit
31) Uncheck DHCP
32) Check Activate on boot
33) IP address: 147.43.0.5
34) Netmask: 255.255.0.0 (this becomes your SAN IP address)
35) Set the hostname manually: san.dba.com
36) Click Continue
37) Choose the location (Asia/Kolkata)
38) Root password: redhat

39) Click Next to begin the installation
40) Do not reboot yet; first add the second hard disk
41) Right-click on SAN (the machine name) and go to Settings
42) Click Add and choose Hard Disk
43) Accept the recommended disk settings
44) Create a new virtual disk
45) 50 GB, store as a single file
46) Next, Finish, then OK
47) Reboot
48) Log in
49) Username: root
50) Password: redhat
51) fdisk -l should show the new 53.6 GB disk




Steps to install Linux on the RAC nodes

1) Create a new virtual machine
2) Custom (advanced)
3) Next
4) Installer disc image (browse to the OEL Linux folder)
5) Virtual machine name: RAC1
6) Browse to the location (E:); create a new folder rac1_mach
7) 1 processor is enough
8) 2 GB RAM
9) Use host-only networking
10) Accept the recommended defaults (Next twice)
11) Create a new virtual disk
12) 60 GB hard disk, store virtual disk as a single file
13) Customize hardware
14) Click Network Adapter, then Add
15) Select host-only
16) Close
17) Both network adapters should now be host-only
18) Finish
19) Press Enter for graphical mode
20) Click Skip
21) Welcome page: click Next
22) Language & keyboard: English
23) Erase all data: Yes
24) Create custom layout
25) Click the free space, then New
26) / 10000
27) /usr 5000
28) /tmp 5000
29) /var 5000
30) swap 4096 (double the RAM)
31) /u01: fill the rest of the space
32) Next
33) Next
34) Configure network devices (Edit eth0)
35) Check Enable IPv4 support and uncheck Enable IPv6 support
36) This is the public IP address
37) Manual configuration
38) IP address: 147.43.0.1
39) Netmask: 255.255.0.0
40) Click Edit on eth1 and again check Enable IPv4 support and uncheck Enable IPv6 support
41) This is the private IP address
42) Manual configuration
43) IP address: 192.168.0.1
44) Netmask: 255.255.255.0
45) Set the hostname manually
46) rac1.dba.com
47) Next; no need to set DNS or gateway
48) Select Asia/Kolkata as the location
49) Root password: redhat
50) Customize now
51) Next
52) Check all the packages in every category
53) Next
54) Click Next to begin the installation
55) Reboot
56) You should see the welcome screen
57) Yes to the license agreement
58) Firewall: disabled
59) SELinux: disabled
60) Set date & time
61) Don't create a user; Forward & Continue
62) Finish
63) Install VMware Tools
63)  Install VM ware Tools
HOW TO INSTALL VMWARE TOOLS
Run the installation as root, from the home location.


1) tar -xzvf /media/VMware\ Tools/VMwareTools-9.6.1-13.tar.gz (match the exact filename of your mounted Tools image)
2) This extracts a vmware-tools-distrib directory
3) cd vmware-tools-distrib
4) ls
5) ./vmware-install.pl (the green executable file)
6) Press Enter every time to accept the default settings
7) After it completes, reboot (init 6)


After configuring the first node (RAC1), we have to configure the RAC2 node.
The best way to do that is cloning, but remember to change the IP addresses & hostname afterwards.
A node has to be powered off for cloning, so make sure you power off RAC1.
Steps to clone RAC1


1) First make sure RAC1 is powered off
2) Right-click RAC1, select Manage, then Clone; a wizard opens
3) Clone from the current state
4) Create a full clone
5) Name: RAC2
6) Location: create a new folder on the E: drive (rac2_mach)
7) After cloning, power on both machines, RAC1 & RAC2


After cloning, go to RAC2 and change the IP addresses & hostname.
Steps to change the IP addresses on RAC2


1) Run the neat command in a terminal to open the network configuration tool
2) Click on eth0 & edit it
3) Switch to static and set the IP address
4) Address: 147.43.0.2
5) Subnet mask: 255.255.0.0
6) Click on eth1 & edit it
7) Address: 192.168.0.2
8) Subnet mask: 255.255.255.0
9) Make sure both interfaces are active
10) Click the DNS tab to assign the hostname
11) Hostname: rac2.dba.com
12) File > Save
13) File > Quit
14) Type ifconfig to check that the addresses changed
15) Go to RAC1 & check its IP addresses
16) ifconfig
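
If the graphical tool is not available, the same change can be made by editing the interface files directly. A minimal sketch, assuming the standard OEL/RHEL file locations:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=147.43.0.2
NETMASK=255.255.0.0
ONBOOT=yes

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0
ONBOOT=yes

# vi /etc/sysconfig/network   (set HOSTNAME=rac2.dba.com)
# service network restart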

After cloning & changing the IP addresses & hostname, you have to create the LUN.

The LUN is created from rac1, through the Openfiler web console.

Steps for creating the LUN

1) From rac1, open the SAN's web console using its IP address
2) firefox https://147.43.0.5:446/
3) Add the exception
4) Get certificate
5) Confirm security exception
6) The Openfiler login page appears
7) Username: openfiler
8) Password: password
9) Click on System
10) Scroll down
11) Network Access Configuration
12) Name: rac1.dba.com
13) Network/Host: 147.43.0.1
14) Netmask: 255.255.255.255
15) Update, then add another one
16) Name: rac2.dba.com
17) Network/Host: 147.43.0.2
18) Netmask: 255.255.255.255
19) Then Update
20) Click on Volumes (main heading)
21) Click on create new physical volumes
22) Click the second disk, /dev/sda (it should show 50 GB)
23) Scroll down & click Create
24) On the right side, click Manage Volumes
25) Enter a volume group name (volgrp, or any name)
26) Check /dev/sda1 & click Add volume group
27) Go to Services (main heading)
28) iSCSI target server: click Enable
29) Go back to Volumes & click Add Volume on the right side
30) Scroll down & create a volume in volgrp
31) Volume name: vol1
32) Volume description: for rac
33) Required space (MB): drag all the way to the right to use the full space
34) Filesystem/Volume type: iSCSI
35) Click Create; the volume should show in green
36) On the right side, click iSCSI Targets
37) Under the Target Configuration heading, click Add
38) Under the LUN Mapping subheading, click Map
39) Under the Network ACL heading, set both hosts to Allow
40) Then click Update
41) Finally, click Logout at the top

How to detect the LUN on both nodes
Steps (use the IP address of the SAN):


1) iscsiadm -m discovery -t st -p 147.43.0.5
2) service iscsi restart
3) fdisk -l (should now show the 50 GB disk)
4) Go to RAC2
5) fdisk -l
6) iscsiadm -m discovery -t st -p 147.43.0.5
7) service iscsi restart
8) fdisk -l
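
To make the LUN reattach automatically after a reboot, it is worth logging in to the discovered target and enabling the iSCSI services at boot; a minimal sketch (run on both nodes):

# iscsiadm -m node -p 147.43.0.5 --login
# chkconfig iscsid on
# chkconfig iscsi on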

After LUN mapping you have to create 2 partitions, one for OCR & VD and one for the databases.
Run these steps as root.
Create the partitions from RAC1, but run partprobe on both nodes.


1. # fdisk /dev/sdb
2. m (list the commands, optional)
3. n (new partition)
4. p (primary)
5. 1
6. Enter (accept the default first cylinder)
7. +10G (for OCR & VD)
8. p (print the partition table)
9. n
10. p
11. Enter (accept the default)
12. +40G (for the databases)
13. p
14. w (write the table and exit)
15. partprobe /dev/sdb (on both nodes)
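
To confirm that both nodes see the new partitions, fdisk should now list /dev/sdb1 (about 10 GB) and /dev/sdb2 (about 40 GB); a quick check from rac1:

# fdisk -l /dev/sdb
# ssh rac2 'partprobe /dev/sdb; fdisk -l /dev/sdb'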


Now check the IP addresses on both nodes:
ifconfig

Then edit the hosts file on RAC1.


1. vim /etc/hosts
2. Do not delete the existing loopback line at the top
3. 127.0.0.1       localhost.localdomain   localhost
4. ####PUB-IP
5. 147.43.0.1      rac1.dba.com            rac1
6. 147.43.0.2      rac2.dba.com            rac2
7. ####PRIV-IP
8. 192.168.0.1     rac1-priv.dba.com       rac1-priv
9. 192.168.0.2     rac2-priv.dba.com       rac2-priv
10. ####VIP
11. 147.43.0.10    rac1-vip.dba.com        rac1-vip
12. 147.43.0.20    rac2-vip.dba.com        rac2-vip
13. ####SCAN-IP
14. 147.43.0.50    rac-scan.dba.com        rac-scan
15. :wq
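
A quick sanity check after saving, assuming the aliases above (the VIP and SCAN addresses will not answer yet; grid brings those up later):

# for h in rac1 rac2 rac1-priv rac2-priv; do ping -c 1 $h; done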

After this, copy the file to RAC2.

FROM ROOT

# scp -v /etc/hosts rac2:/etc/hosts

Enter the root password for RAC2.

To verify, open a new terminal & connect to RAC2:

# ssh rac2

Enter the password for RAC2, then check the file on RAC2:

vim /etc/hosts

After these steps, add the groups & users; if they already exist, delete them first.

For passwordless connectivity, use the same oracle password on both RAC1 & RAC2.

1. userdel -r oracle
2. groupdel dba
3. groupdel oinstall
4. groupadd -g 5001 oinstall
5. groupadd -g 5002 dba
6. groupadd -g 5003 asmadmin
7. useradd -u 5004 -g oinstall -G dba,asmadmin -d /u01/home -m oracle
8. chown -R oracle:oinstall /u01
9. chmod -R 775 /u01
10. passwd oracle (must be the same on both nodes)
11. Password: oracle
12. xhost +


Repeat exactly the same steps on RAC2 (same GIDs & UID, same oracle password, finishing with xhost +).
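
To verify that the users & groups match on both nodes (the UIDs & GIDs must be identical for RAC), id should print the same thing on rac1 and rac2, roughly:

# id oracle
uid=5004(oracle) gid=5001(oinstall) groups=5001(oinstall),5002(dba),5003(asmadmin)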


After this, configure the ASM library.
In RAC1, add the software through a shared folder:


1. In RAC1, click on VM
2. Settings
3. Options
4. Shared Folders
5. Always enabled
6. Add
7. Next
8. Browse
9. Add
10. Select the software folder & click OK
11. Next
12. Finish & OK
13. From root, go to the software location
14. # cd /mnt/hgfs/11gr2p3 (the shared folder name used in this guide)
15. ls
16. It will show the ASM library RPMs plus the DB software & grid software
17. You need to execute the 3 RPMs one by one, with the same command each time
18. rpm -ivh <rpm name> --force --nodeps
19. After executing these 3, copy them to RAC2
20. # scp -v oracleasm* rac2:/root
21. Give the root password
22. Now go to RAC2, to the location you copied them to, & execute the 3 RPMs
23. rpm -ivh <rpm name> --force --nodeps
24. Back to RAC1
25. Configure the Oracle ASM library by running this as the root user on both nodes
26. # oracleasm configure -i
27. Default user: oracle
28. Default group: oinstall
29. y
30. y
31. oracleasm exit
32. oracleasm init
33. Now go to RAC2
34. oracleasm configure -i
35. oracle
36. oinstall
37. y
38. y
39. oracleasm exit
40. oracleasm init
41. Now back to RAC1
42. Label the disks with names
43. oracleasm createdisk OCR_VD /dev/sdb1 (the 1st partition we created)
44. oracleasm createdisk DATA /dev/sdb2 (the 2nd partition we created)
45. oracleasm listdisks
46. Now go to node 2; there is no need to create the disks, just run these commands
47. oracleasm scandisks
48. oracleasm listdisks
49. Now check the date on both nodes; the gap between the two nodes should be no more than 6 seconds, otherwise a node will get rebooted
50. On RAC1, type date
51. On RAC2, type date
52. Now go back to RAC1
53. mv /etc/ntp.conf /etc/ntp.conf_bkp
54. service ntpd restart (should show FAILED)
55. Now go to RAC2
56. mv /etc/ntp.conf /etc/ntp.conf_bkp
57. service ntpd restart (should show FAILED)
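
Renaming ntp.conf is deliberate: with NTP deconfigured, Oracle's Cluster Time Synchronization Service (CTSS) will run in active mode once grid is installed and keep the node clocks in sync. A quick way to compare the two clocks from rac1:

# date; ssh rac2 date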



From RAC1, switch to the oracle user & start the installation of the GRID software.

1. su - oracle
2. cd /mnt/hgfs/11gr2p3
3. cd grid
4. ls
5. ./runInstaller
6. Skip software updates
7. Next
8. Install & configure Oracle Grid Infrastructure for a cluster
9. Advanced installation
10. English
11. Here you specify the SCAN name, cluster name & SCAN port
12. You can leave the defaults, except the SCAN name
13. Uncheck GNS
14. To find the SCAN name, open a new terminal
15. cat /etc/hosts
16. SCAN name: rac-scan
17. Here you specify the 2nd node's information
18. Click Add
19. Public hostname: rac2.dba.com
20. Virtual hostname: rac2-vip.dba.com
21. Now you have to configure SSH connectivity
22. Click SSH connectivity
23. OS username: oracle
24. OS password: oracle
25. Then click Setup
26. When it reports OK, click OK
27. Next
28. Oracle ASM (1st option)
29. Next
30. Now create the disk groups. Make sure you create 2 separate ones, one for OCR & VD and one for the database; if you create only one and that disk group gets corrupted, your whole cluster will be down
31. Disk group name: OCR_VD
32. External redundancy
33. Select the 1st partition, ORCL:OCR_VD
34. Here specify the password
35. Password: Manager1
36. Next
37. Do not use IPMI
38. Next
39. OS group name: oinstall
40. Next
41. Click Yes
42. Here specify the location
43. Next
44. Next
45. Fix & Check Again
46. This generates a fixup script; run it on both RAC1 & RAC2
47. After executing the script on both nodes, click OK
48. Ignore All, then Next, then Yes
49. Install
50. You will get two scripts that have to be run on both nodes. Run the 1st script on RAC1, then on RAC2. Run the 2nd script on RAC1 first and wait for it to report complete (it takes about 15 minutes), then run it on RAC2 after RAC1 has completed (see the sketch after this list)
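
For reference, the two scripts in step 50 are typically orainstRoot.sh and root.sh; the installer dialog shows the exact paths, and the ones below assume the locations used in this guide:

# 1st script, on rac1 then rac2:
/u01/app/oraInventory/orainstRoot.sh
# 2nd script, on rac1 first (wait for it to complete), then rac2:
/u01/app/11.2.0/grid/root.sh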


After the completion of grid, create an environment file (bash profile) for grid.
Create it from the root user.
1. Find the grid home
2. cd /u01/app/11.2.0/grid
3. pwd
4. cd
5. cd /u01/home (create the grid profile in the home location of the oracle user)
6. vi grid.env
7. export ORACLE_HOME=/u01/app/11.2.0/grid
8. export PATH=$ORACLE_HOME/bin:$PATH:.
9. :wq
10. ls
11. chown -R oracle:oinstall grid.env
12. chmod -R 775 grid.env
13. su - oracle
14. pwd
15. ls
16. The profile file should be here
17. . grid.env
18. crsctl check crs
19. All the daemons should be online, or else the grid software was not installed properly
20. scp -v grid.env rac2:/u01/home
21. Open a new terminal
22. ssh rac2
23. Enter the password for RAC2
24. redhat
25. su - oracle
26. ls
27. Check that grid.env is there
28. . grid.env
29. crsctl check crs
30. Make sure all the daemons are online, or else the grid software was not installed properly (see the expected output below)
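
On a healthy node, crsctl check crs should print output similar to this (all four services online):

$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online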


Now that grid is installed, go to the RAC1 node & install the RDBMS software.


1. xhost + (from the root user)
2. su - oracle
3. cd /mnt/hgfs/11gr2p3
4. cd database
5. ls
6. ./runInstaller
7. y
8. Uncheck receive security updates
9. Skip software updates
10. Install database software only
11. Oracle Real Application Clusters database installation (select all nodes)
12. English
13. Enterprise Edition
14. Oracle base: /u01/home/oracle
15. oinstall for the optional group
16. Ignore All & Next, then Yes
17. Install
18. At about 95% it will give you 1 script; you first have to run it on RAC1, then on RAC2
19. Run the script as the root user, first on RAC1, then on RAC2
20. You can connect to RAC2 with ssh rac2
21. Set the bash profile for the RDBMS in the home location
22. vi rdbms.env
23. Open a new tab & check the Oracle home location
24. /u01/home/oracle/product/11.2.0/db_home1
25. export ORACLE_HOME=/u01/home/oracle/product/11.2.0/db_home1
26. export PATH=$ORACLE_HOME/bin:$PATH:.
27. :wq
28. Copy the RDBMS environment file to RAC2
29. scp -v rdbms.env rac2:/u01/home
30. Go to RAC2 & check
31. ssh rac2
32. su - oracle
33. ls
34. Both grid.env & rdbms.env should be there
35. Source the RDBMS environment on both RAC1 & RAC2
36. . rdbms.env
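
After sourcing it, a quick check that the environment really points at the RDBMS home:

$ . rdbms.env
$ echo $ORACLE_HOME
/u01/home/oracle/product/11.2.0/db_home1
$ which dbca
/u01/home/oracle/product/11.2.0/db_home1/bin/dbca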

For disk group redundancy:

1. External: 1 (just striping)
2. Normal: 2-3 (striping + mirroring)
3. High: 3-5 (striping + triple mirroring)
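
This guide uses external redundancy everywhere, but as an illustration of normal redundancy, a disk group with two failure groups could be created from the ASM instance like this (DISK1/DISK2 are hypothetical ASMLib disk labels):

$ su - oracle
$ . grid.env
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm
SQL> create diskgroup dg_norm normal redundancy
       failgroup fg1 disk 'ORCL:DISK1'
       failgroup fg2 disk 'ORCL:DISK2';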



After this, make sure you create the disk group for the RDBMS using asmca.
Steps for creating a new disk group:


1. fdisk /dev/sdb
2. Create a new partition of 40 GB
3. partprobe /dev/sdb (on both nodes)
4. oracleasm createdisk DG /dev/sdb3 (on node 2, run oracleasm scandisks)
5. su - oracle
6. . grid.env
7. export ORACLE_SID=+ASM1
8. asmca
9. Give a name: dg1
10. Create
11. External redundancy (because every partition is kept separate)
12. Click OK at the bottom of the screen
13. The new disk group will be created
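
To confirm the new disk group from SQL*Plus on the ASM instance:

$ sqlplus / as sysasm
SQL> select name, state, type, total_mb from v$asm_diskgroup;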



After sourcing the RDBMS environment on both nodes, go to RAC1 & run dbca.

HOW TO CREATE A DATABASE USING DBCA

1. dbca
2. Oracle Real Application Clusters database
3. Create a database
4. General purpose or transaction processing
5. Global database name: prod (any name)
6. Keep it Admin-Managed
7. Select both rac1 & rac2 for the instances (click Select All)
8. Uncheck Configure Enterprise Manager
9. Use the same password for all accounts
10. Use a common location for all database files
11. Uncheck Specify Flash Recovery Area
12. Give the sizes you want
13. Next
14. Finish
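
Once dbca finishes, a minimal check that both instances are up, assuming the database name prod chosen above (srvctl & crsctl are available after sourcing grid.env):

$ . grid.env
$ srvctl status database -d prod
Instance prod1 is running on node rac1
Instance prod2 is running on node rac2
$ crsctl stat res -t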

