Category Archives: HowTo

vcsa 6.5 installer error – There is no ovftool :)

When I tried to install vcsa 6.5 from my Mac, I got the error below; I guess everybody will face it on every install.

Copy the “vcsa” folder from inside the ISO to the folder referenced in the installer log 🙂

/private/var/folders/sl/dfqvsnj548g7wsq94wrmn3tm0000gn/T/AppTranslocation/

Then go back and click Next again; also, inside the vcsa folder you will see the vcsa.ova file too!
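From the terminal it boils down to something like this (the ISO mount point and the random AppTranslocation subfolder are assumptions; take the exact target path from the installer log):

cp -R "/Volumes/VMware VCSA/vcsa" /private/var/folders/sl/dfqvsnj548g7wsq94wrmn3tm0000gn/T/AppTranslocation/<random_id>/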

VM


Parted util failed with message: Error: The primary GPT table states that the backup GPT is located beyond the end of disk

I hit this error for the first time when I wanted to test VSAN with vSphere 6.5 and wiped a previously defined RAID0 configuration.

Naturally the first things that come to mind are booting a LiveCD, using Hiren’s BootCD, or going even further and attaching the disks to a Windows machine to wipe them …

What is more interesting is partedUtil, the tool VMware itself recommends when you hit problems adding disks during vSAN setups (when there is a partition on them).

To be honest, it had never occurred to me before to drop to the console with Alt+F1 during installation, nor to wonder what the default username/password is: it is root, and just press Enter for the empty password.

Next, even better: where are the disks?

ls /vmfs/devices/disks/

You will see the disks as naa.xxxxxx.xxxxx and their partitions as :1, :2.

Pointless attempts with partedUtil such as fix and delete got me nowhere; since I was concentrating on deleting, it never even crossed my mind that I could simply write over it. Let's create an msdos partition table on the disk in question.

partedUtil setptbl /vmfs/devices/disks/naa.600508b1001ce940a7831ba05a5475d3 msdos

Now run ls again and see how the partitions are gone.
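If you want to double-check, partedUtil can also print the table it has just written (same example disk as above):

partedUtil getptbl /vmfs/devices/disks/naa.600508b1001ce940a7831ba05a5475d3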

After fixing the problems on the other disks where needed, switch back with Alt+F2 and carry on ..

VM

Openstack Security Group and FWaaS, CLI only!

Openstack neutron provides security at two different layers. The first is port based: when an instance is created, a port is created first, and through the Security Group assigned to that port you can protect VM-to-VM traffic within the same L2 network. The second is on the Router: whenever traffic flows between two different networks, or from instances towards the internet and/or from the internet towards the instances, the firewall rules applied on the Router kick in. In other words, like NSX, Neutron provides protection on both the north-south and the east-west paths.
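Since the title says CLI only, here is a hedged sketch of the router-side firewall with the legacy neutron client (FWaaS v1; the allow-http/web-policy/web-fw names are made up):

neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow --name allow-http
neutron firewall-policy-create --firewall-rules allow-http web-policy
neutron firewall-create web-policy --name web-fw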

Security Groups

A Security Group (SG) is compatible with nova security groups (NSG); it lets you define rules in the ingress/egress direction, apply them to the related neutron ports, and apply rule changes in real time.

The behavior in the ingress direction is: only matched traffic passes, otherwise drop. Egress works the same way, except that whenever a new Security Group is created all outbound traffic is allowed.

By default Openstack ships with a “default security group”: all outbound traffic and all traffic inside the security group is allowed, while all traffic coming from outside is blocked. A minimal CLI example follows below.
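For instance, creating a group and opening inbound SSH with the legacy neutron client (the name web-sg is made up; everything else inbound stays dropped):

neutron security-group-create web-sg --description "web servers"
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 web-sg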

A Security Group is, in the end, an iptables implementation, but the ML2 + OVS integration is a bit convoluted: purely so that iptables can be applied, a linux bridge is inserted between OVS and the instance. You can see an example drawing below.

[Figure: a linux bridge inserted between the instance and OVS, carrying the iptables security group rules]


sslvpnd can cause HA sync / web interface unresponsive issues? -another Fortinet story-

Today the Fortinet web interface became unresponsive. We found some articles and expected that killing/restarting httpd would be enough, but then we hit policy load issues: for example, trying to list the rules returned an empty response, and after some time the UI was gone again and httpd had to be restarted to access the web UI.

Then, after some investigation, we saw that the cluster checksums were not consistent (command: diagnose sys ha cluster-csum).

We tried to sync the HA config, but it did not succeed (command: exec ha synchronize start) (for more, please check).

Then, maybe because we did not prioritise it, on the cluster member whose web interface was still working first the snmp service stopped responding and then the sslvpn connections started to fail! Around that time, as I remember, we changed the password of the sslvpn user, but I don't think that helped; yet when we killed sslvpnd, the non-responsive Fortinet box magically came back to life. After that the HA csum checked out and snmp started to work again!

Actually, if we had not tried this (the vendor also said that the related firmware has a bug), we would have had to restart the nodes, and that would have caused some downtime. The version is 5.2.6.
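The kill itself was nothing fancy, roughly this (the PID comes from the diag sys top output; if it refuses to die, fnsysctl kill -9 <pid> from the next story is the blunter option):

diag sys top
diag sys kill 11 <pid>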

A good link for debugging HA: http://kb.fortinet.com/kb/documentLink.do?externalID=FD36494

diag debug enable
diagnose debug console timestamp enable
diag debug application hasync -1
diag debug application hatalk -1
execute ha synchronize start

When you cannot kill a process gently -another Fortinet story-

I expect that you already know the PID, but if not, you have two ways.

Option 1

Walter (global) # diag test app snmpd 1

snmpd pid = 161

Option 2 (somehow the related commands do not return the PIDs of some processes; in that case start to use fnsysctl)

List the pid files, then get the PID from the related file.

Walter (global) # fnsysctl ls /var/run/

Walter (global) # fnsysctl cat /var/run/snmpd.pid

161

Then execute the following (somehow diag sys kill 11 <pid_id> does not kill the related pid):

Walter (global) # fnsysctl kill -9 161

that's it!

When the Fortigate IPS engine and AV engine fuck everything!

Since the beginning we have always had trouble: choosing the wrong hardware, developer issues like handling sessions on a single core, then the ASIC things. It is always changing: NPx supports something, NPy supports more, and at the end of the day you will always end up with whatever NPx/y/z/t/u has just been released. Cost is another issue, but don't worry, the vendors always think these are usual things. I hope one day DPDK or some other technology will let us use firewalls without ASICs and without always having to buy new hardware. (I know, you'll say SDDC, SDN; I know!)

I know this device is a UTM, and UTM is a somewhat fancy thing: you know you shouldn't use it, you know what will happen, but you have to because of some situations (for ISPs).

AV issues, IPS engine issues, conserve mode! I don't know how you can really protect my device and the network behind it.

The big problem is AV and memory; from the small devices to the bigger ones, I still cannot understand why more RAM is not used! Is memory expensive? Or is it maybe still 32-bit? What can't the developers handle?

At the end AV fills up the RAM, the whole device is under stress and has communication issues; the BGP and OSPF sessions are gone! :(((

As for AV, it does not work like a Linux service or something; you have to kill it! Step by step, each process; disabling AV does not help every time.

Log in to your device, switch to config global (it is also a command), then execute diagnose sys top to see the processes, press q to leave, and at the console execute diagnose sys kill 11 <process_id>, as sketched below. We do it from the console because we may get some output; then follow the console for memory usage.
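The whole sequence as a sketch (on the boxes I have seen, the AV scanner shows up as scanunitd in the top output, but verify on yours; <process_id> is whatever top shows):

config global
diagnose sys top
diagnose sys kill 11 <process_id>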

Another issue is the ips engine, and such an understandable command, diag test 🙂 lovely. It is not meaningful for me, but it is meaningful for the developer or whoever maintains the cli commands.

diagnose test app ipsmonitor

You will see nice options; choose exactly what you want: restart, stop, start, get status.

Also, if you run a cluster, consider doing the same things on the slave 🙂 To switch to the slave:

  • config global
  • get system ha status
  • exec ha manage 1 (mostly)

Good fixes!

VM

VisorFSObj: 1954: Cannot create file /var/lib/vmware/hostd/journal/1462853708.8163 for process hostd-worker because the inode table of its ramdisk (root) is full.

Problem: no vMotion, no possibility to start a VM.

In vmware.log on the ESXi host you see something like the line below:

VisorFSObj: 1954: Cannot create file /var/lib/vmware/hostd/journal/1462853708.8163 for process hostd-worker because the inode table of its ramdisk (root) is full.

All the articles talk about snmpd, but snmpd was not running for us and the inode table was still full.

The command to see inode usage is: esxcli system visorfs ramdisk list

To solve the problem:

You can use any workaround to handle it until you find this article, but the better solution is to stop unused services: for example, if it is an HP host you can stop some of them, like ams, or stop the CIM services or something like that, to free up inodes, and then try to move the VMs off that server!
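As a sketch on an HP host (assuming the hp-ams agent is the inode hog, which is a common case; the service names differ per vendor and image):

esxcli system visorfs ramdisk list     # confirm the root ramdisk is out of inodes
/etc/init.d/hp-ams.sh stop             # HP Agentless Management Service
/etc/init.d/sfcbd-watchdog stop        # CIM broker, if you can live without it for a while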

I know it's not a good solution, but this is the last thing we can do to solve the problem before finding a better one 😉

VM

What happens to undeletable/unused vVOLs when you kick the ass of IBM Spectrum Control

Hello All

Actually it is all my fault, 100% for sure.

To run a vVOLs test on an IBM XIV, IBM Spectrum Control was installed, the XIV was added to it, and the related configuration was done through it.

I don't know how, but I made some mistakes around Service creation via IBM Spectrum Control, and after all the tests were over I destroyed the whole vCenter environment without cleaning anything up on IBM Spectrum Control and the XIV; but I knew that I would be back (Terminator).

When I came back and tried to remove the group pool from a freshly installed IBM Spectrum Control, I faced an issue like the one below!

[Screenshot: IBM Spectrum Control error while removing the group pool]

After a long conversation with IBM support, we also found another effect like the one in this link.

The way to solve the problem is:

First list the volumes on the XIV (under the root domain or the newly created domain):

vol_list managed=yes

Then delete the volumes named in the error, or delete all the volumes that belong to the same group pool if you are going to delete the group pool anyway.
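With XCLI that is one command per volume (the volume name is a placeholder; take the real names from the vol_list output or from the error message):

vol_delete vol=<volume_name>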

After all this, I believe the instructions in the link can be followed, but there is also another way:

Log in to IBM Spectrum Control; actually I restarted the IBM Spectrum Control service, but I guess it is not mandatory; like Obi-Wan, you should learn patience.

Then remove the group pool from IBM Spectrum Control.

VM

A bit of DB, a bit of monitoring/troubleshooting, and a bit of Openflow programming with OVS

Now let's go a bit further; it turned out to be a long post, but an enjoyable one!

First, a few things we will need:

In the previous post we listed the OVS tables using ovsdb-client; now let's get the same thing, or something similar, as follows.

noroot@kvm-ovs-server1:~$ sudo ovsdb-client get-schema

Note: you can use this command as > sudo ovsdb-client get-schema | python -m json.tool < , or paste the output into http://www.jsoneditoronline.org and work on it directly in the graphical view.
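Beyond get-schema, a couple of related ovsdb-client subcommands are handy for poking around (these exist in stock Open vSwitch; run them on the host where ovsdb-server lives):

sudo ovsdb-client list-dbs     # list the databases served (typically Open_vSwitch)
sudo ovsdb-client dump         # dump every table in readable text form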

[Screenshot: the get-schema JSON opened in jsoneditoronline.org]


What you need to know before learning Openstack Networking …

Before understanding Openstack networking, you need to understand the following topics:

  • cgroup / namespace
  • Linux networking namespaces
  • the ip command and the iproute2 package
  • creating a networking namespace
  • veth and veth pairs
  • tun/tap interfaces
  • patch ports

Cgroup (resource management) / Namespace (process isolation)

Cgroups and namespaces are Linux kernel features I started hearing about together with containers (especially LXC and Docker).
Cgroups are not very relevant to our topic, but they are worth knowing briefly: a cgroup shares hardware resources out among tasks and users, and while doing so it handles allocation, prioritization, access restriction, management and monitoring of the resource.

Namespaces provide what matters more to us right now: process isolation (every process is separated from the system processes, much like overlay networks let 192.168.0.0/24 be used by all tenants at once), or in other words, simple process virtualization. There are 5-6 types of namespaces: PID (processes), Network (the network stack), UTS (hostname), Mount (disk access), User (UIDs) and IPC. There are probably other namespaces too, but they do not seem to have been implemented yet.
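You can see those namespace handles on any modern Linux box; each entry under /proc/<pid>/ns is one namespace (a quick look, nothing Openstack-specific):

ls -l /proc/self/ns

It typically lists ipc, mnt, net, pid, user and uts (plus cgroup on newer kernels).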

Linux Networking Namespace

Logically, it is a copy of the existing “network stack”: a snapshot, a right-click -> copy 🙂 . Independent routes, firewall rules and network devices (for example loopback). SNMP, sockets, /procfs, /sysfs.

To think of it in more real-world terms: imagine a booted Linux operating system with interfaces like eth0/em1/lo, plus something like a veth0 (virtual ethernet card); as if several containers/virtual machines were running and each had its own IP address; where iptables -L -v -n shows rules different from the host operating system, and netstat -an again shows sockets different from the host operating system.
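Here is a minimal iproute2 sketch that creates exactly that picture (the names tenant1/veth0/veth1 and the 192.168.0.0/24 addresses are made up for illustration):

sudo ip netns add tenant1                                  # create the namespace
sudo ip link add veth0 type veth peer name veth1           # create a veth pair
sudo ip link set veth1 netns tenant1                       # move one end into the namespace
sudo ip addr add 192.168.0.1/24 dev veth0
sudo ip link set veth0 up
sudo ip netns exec tenant1 ip addr add 192.168.0.2/24 dev veth1
sudo ip netns exec tenant1 ip link set veth1 up
sudo ip netns exec tenant1 ping -c 1 192.168.0.1           # reach the host end from inside

The last command pings the host end of the veth pair from inside the namespace, showing the namespace runs its own independent stack.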

Networking namespaces are what make the use of overlay networking an everyday thing.