Friday, 16 October 2015

ERROR:

SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0


SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

OR

SQL> startup nomount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-01507: database not mounted


SQL> alter database mount;

Database altered.

SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-00279: change 1615709 generated at 10/16/2015 01:06:27 needed for thread 1
ORA-00289: suggestion :
D:\APP\ADMIN\FLASH_RECOVERY_AREA\MCCDELO1Q\ARCHIVELOG\2015_10_16\O1_MF_1_58_%U_.
ARC
ORA-00280: change 1615709 for thread 1 is in sequence #58


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00308: cannot open archived log
'D:\APP\ADMIN\FLASH_RECOVERY_AREA\MCCDELO1Q\ARCHIVELOG\2015_10_16\O1_MF_1_58_%U_
.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.


ORA-10879: error signaled in parallel recovery slave
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\APP\ADMIN\ORADATA\MCCDELO1Q\SYSTEM01.DBF'


--------------------------------------------------------

RESOLUTION:

The missing archived log (sequence #58 above) makes a complete recovery impossible, so the database has to be forced open with the hidden parameter _allow_resetlogs_corruption. Treat this as a last resort: it is unsupported without Oracle Support involvement and can leave logical corruption behind, so take a full backup as soon as the database is open.


SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
mccdelo1q        MOUNTED

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
SQL> ALTER SYSTEM SET "_allow_resetlogs_corruption"= TRUE SCOPE = SPFILE;

System altered.

SQL> ALTER SYSTEM SET undo_management=MANUAL SCOPE = SPFILE;

System altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

SQL> CREATE UNDO TABLESPACE undo1 datafile 'D:\app\Admin\oradata\MCCDELO1Q\undo1_1.dbf' size 200m autoextend on maxsize unlimited;

Tablespace created.

SQL> ALTER SYSTEM SET undo_tablespace = undo1 SCOPE=spfile;

System altered.

SQL> alter system set undo_management=auto scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
Database opened.
SQL> alter system switch logfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
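
Once the database is open, it is worth a quick sanity check that the resetlogs took effect; a minimal query:

SQL> select name, open_mode, resetlogs_time from v$database;

Expect OPEN_MODE to be READ WRITE with a fresh RESETLOGS_TIME. Also remember that OPEN RESETLOGS invalidates all earlier backups, so take a new full backup immediately.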


Thank You.
NJ

 

Friday, 28 August 2015

SAN V/S DAS

SAN (storage area network) is a high-speed network of storage devices that also connects those storage devices with servers. It provides block-level storage that can be accessed by the applications running on any networked servers. SAN storage devices can include tape libraries and disk-based devices, like RAID hardware.

SAN Vs. DAS Performance

Organizations often choose to deploy a storage area network because it offers better flexibility, availability and performance than direct-attached storage (DAS). Because a SAN removes storage from the servers and consolidates it in a place where it can be accessed by any application, it tends to improve storage utilization. Storage utilization improvements often allow organizations to defer purchases of additional storage hardware, which saves money and requires less space in the data center.

Thanks to high-speed connections (usually Fibre Channel), SANs often provide better performance than DAS. Also, because SANs usually offer multiple connections to and from the data center's servers, they also improve availability. In addition, separating the storage from the servers frees up the computing resources on the servers for other tasks not related to storage.

SANs Simplify Management Tasks

SANs are particularly helpful in backup and disaster recovery settings. Within a SAN, data can be transferred from one storage device to another without interacting with a server. This speeds up the backup process and eliminates the need to use server CPU cycles for backup. Also, many SANs utilize Fibre Channel technology or other networking protocols that allow the networks to span longer distances geographically. That makes it more feasible for companies to keep their backup data in remote locations.

Utilizing a SAN can also simplify some management tasks, potentially allowing organizations to hire fewer IT workers or to free up some IT workers for other tasks. It's also possible to boot servers from a SAN, which can reduce the time and hassles involved in replacing a server.

SAN Alternatives

Before the advent of SANs, organizations generally used direct-attached storage (DAS). As the name implies, direct-attached storage is directly attached to the server, residing either on the server or in a standalone storage device that is not part of a separate storage networking environment. Many smaller organizations continue to use DAS today because it offers lower upfront costs than deploying a SAN. However, for larger companies, the benefits of a SAN often outweigh the costs.

Sometimes people confuse the term SAN with the term NAS, which stands for "network-attached storage." The key to distinguishing the two lies in the last term of each acronym: a SAN (storage area network) is an actual network, while NAS (network-attached storage) refers to a storage device, typically in an IP network. While SANs provide block-level storage for servers, a NAS device provides file-level storage for end users. For example, the mail application on your company servers might utilize a SAN to store all the messages, contacts and other data it requires; by contrast, an end user would use a NAS device to save files, such as word processing documents or spreadsheets. Operating systems see a SAN as a disk, while they see a NAS device as a file server.

Making things somewhat more confusing, some storage systems take a hybrid approach, offering some SAN capabilities as well as some NAS capabilities. It's also possible to include NAS devices within a SAN.
SAN Implementation

To set up a simple SAN, you need only three major components: a SAN switch, a storage device and a server. You'll also require cables to connect the various elements together and SAN management software. In most real-world settings, a SAN will include many different switches, storage devices and servers, and it will likely also include routers, bridges and gateways to extend the SAN over large areas and to connect to other parts of the data center network. The SAN's topology will depend on its size and the needs of the organization.

The process of deploying a SAN requires several steps. First, you need to design your SAN, taking into account your current needs and future scalability requirements. Second, you'll need to select a vendor or vendors to provide the hardware and software you'll need, as well as any related services. Next, you'll install the necessary hardware and then install and configure the software for managing your SAN. Deploying a SAN is a complicated process that often requires specialized knowledge and a great deal of planning, particularly if your SAN is very large.

SAN Technology

Several different industry groups have developed standards related to SAN technology. The most prominent is probably the Storage Networking Industry Association (SNIA), which promotes the Storage Management Initiative Specification (SMI-S), as well as related standards. The Fibre Channel Industry Association (FCIA) also promotes standards related to SAN and administers the SANmark Qualified Program.

Fibre Channel is currently the most widely used communication protocol for SANs, but it is by no means the only one. Some SAN networks rely on iSCSI communication, a mapping of SCSI protocol over TCP/IP. SANs can also use ATA over Ethernet (AoE), Fibre Channel over Ethernet (FCoE), ESCON over Fibre Channel, HyperSCSI and some other protocols.

Cheers to SAN!

Thanks
NJ



 
Network Attached Storage (NAS) V/S Storage Area Network (SAN)

Network Attached Storage (NAS)

Definition - What does Network Attached Storage (NAS) mean?

Network attached storage (NAS) is a dedicated server, also referred to as an appliance, used for file storage and sharing. NAS is a hard drive attached to a network, used for storage and accessed through an assigned network address. It acts as a server for file sharing but does not provide other services (like email or authentication). It allows additional storage to be added to an existing network without shutting servers down for maintenance and upgrades.

NAS is a complete system designed for heavy network systems, which may be processing millions of transactions per minute. NAS provides a widely supported storage system for any organization requiring a reliable network system.
Techopedia explains Network Attached Storage (NAS)

Organizations looking for the best, reliable data storage methods, which can be managed and controlled with their established network systems, often choose network attached storage. NAS allows organizations and home computer networks to store and retrieve data in bulk amounts for an affordable price.

The following three components play an important role in NAS:

    NAS Protocol: NAS servers fully support the Network File System (NFS) and Common Internet File System (CIFS) protocols. NAS also supports other network protocols, including SCP and the File Transfer Protocol (FTP). However, communication is done most efficiently and reliably over TCP/IP. The initial purpose of the NAS design was only file sharing over UNIX across a LAN. NAS also strongly supports HTTP, so users/clients can easily download files directly from the Web if the NAS is connected to the Internet.
    NAS Connections: Different mediums are used for establishing connections with NAS servers, including: Ethernet, fiber optics and wireless mediums with 802.11 standards.
    NAS Drives: Any technology can be used for this purpose, but SCSI is used by default. ATA disks, optical discs and magnetic media are also supported by NAS.


Storage Area Network (SAN)

Definition - What does Storage Area Network (SAN) mean?

A storage area network (SAN) is a secure high-speed data transfer network that provides access to consolidated block-level storage. A SAN makes a network of storage devices accessible to multiple servers. SAN devices appear to servers as attached drives, eliminating traditional network bottlenecks.

SANs are sometimes also referred to (albeit redundantly) as SAN storage, SAN network, network SAN, etc.
Techopedia explains Storage Area Network (SAN)

Introduced in the early 2000s, SANs were initially limited to enterprise class computing. Today, high-speed disk costs have gradually dropped and SANs have become a mainstay for greater organizational storage.

SAN implementation simplifies information life cycle management and plays a critical role in delivering a consistent and secure data transfer infrastructure.

SAN solutions are available as two types:

    Fibre Channel (FC): Storage and servers are connected via a high-speed network of interconnected Fibre Channel switches. This is used for mission-critical applications where uninterrupted data access is required.
    Internet Small Computer System Interface (iSCSI) Protocol: This infrastructure gives the flexibility of a low-cost IP network.

Both provide advantages based on business requirements.

The advantages of SAN include:

    Storage Virtualization: Server capacity is no longer linked to single storage devices, as large and consolidated storage pools are now available for software applications.
    High-Speed Disk Technologies: An example is FC, which offers data retrieval speeds that exceed 5 Gbps. Storage-to-storage data transfer is also available via direct data transmission from the source to the target device with minimal or no server intervention.
    Centralized Backup: Servers view stored data on local disks, rather than multiple disk and server connections. Advanced backup features, such as block level and incremental backups, streamline IT system administrator responsibilities.
    Dynamic Failover Protection: Provides continuous network operation, even if a server fails or goes offline for maintenance, which enables built-in redundancy and automatic traffic rerouting.

SAN is offered by server manufacturers, such as IBM and HP. Server-independent SAN providers include EMC and Network Appliance.

Play with storage and don't blame the network! :D :D

Thanks
NJ

 

Wednesday, 26 August 2015

How to Set Up an ArcSDE Trace for API Developer Troubleshooting ?



The trace corresponds to the client API calls, not the calls made inside the server executable to the database itself.


Using the trace environment variables is simple.



Create following two environment variables:



          SDETRACELOC=C:\TEMP1\trace



          SDETRACEMODE=vf



Now launch ArcGIS Desktop. When you are finished, disable the trace by removing or altering the SDETRACELOC variable.
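
If you prefer not to set these machine-wide, a minimal batch sketch (modeled on the intercept batch file later in this blog; the Desktop path is an assumption for your install) sets them for a single session:

@echo off
rem Set the trace variables for this cmd session only, then launch ArcMap.
set SDETRACELOC=C:\TEMP1\trace
set SDETRACEMODE=vf
start /d "C:\Program Files\ArcGIS\Desktop10.2\bin\" ArcMap.exe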



And to know what is happening between the SE_stream_execute and the first SE_stream_fetch, you'll have to use Oracle's SQL trace.
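
A sketch of enabling that trace with DBMS_MONITOR (the SID and SERIAL# values below are placeholders; pull the real ones from V$SESSION first, and note the gsrvr filter applies to application server connections, so for direct connections look for the client executable instead):

-- Find the ArcSDE server session for your connection.
SQL> select sid, serial#, program from v$session where program like '%gsrvr%';

-- Trace it, including wait events and bind values.
SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

-- Reproduce the slow execute/fetch, then turn tracing off.
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);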


So that is how you set up an SDE trace.


Thanks.
NJ




 

Saturday, 22 August 2015

How to check the REDO logfile size estimate ? What should the size of the REDO logfile be ?

Use the following query to get the redo logfile size estimate (the value is reported in megabytes, and is only populated when FAST_START_MTTR_TARGET is set):


set lines 2000 pages 2000
select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;





Explanation :


The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance. Undersized log files increase checkpoint activity and reduce performance.

Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle Database automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to undersized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager.

It may not always be possible to provide a specific size recommendation for redo log files, but redo log files in the range of 100 MB to a few gigabytes are considered reasonable. Size online redo log files according to the amount of redo your system generates. A rough guide is to switch log files at most once every 20 minutes.
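
To sanity-check that guideline against your own system, you can bucket log switches by hour from V$LOG_HISTORY; a minimal sketch:

-- Log switches per hour over the last day; roughly three or more per hour
-- suggests the online redo logs are undersized.
select to_char(first_time, 'YYYY-MM-DD HH24') as hour, count(*) as switches
from v$log_history
where first_time > sysdate - 1
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;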

Happy Blogging :)

Wednesday, 12 August 2015

Slow ArcSDE Performance Troubleshooting

Why is my ArcSDE geodatabase running so slow?  Is it because I have too much data?  Probably not.  That’s why you bought the big enterprise DBMS, isn’t it?  Whether it’s a direct connection or an application server connection, sluggish SDE performance will happen from time to time.

Update statistics and rebuild indexes:

A good habit to get into is analyzing any new feature class or table brought into the geodatabase.  Also, see this link to rebuild indexes and update statistics for the entire GDB, and add it to your GDB maintenance routine:

FAQ:  How can ArcSDE performance be improved?

Compress:

Compressing (not to be confused with reconcile/post) moves edits stored in the delta tables into the base tables and removes any unreferenced states. Even if you’re a small shop with few editors, letting these tables grow unmanaged will wreak havoc on performance over time.  Also, you don’t necessarily need to disconnect users, delete versions or unregister replicas to benefit from a compress.

Compressing an ArcSDE geodatabase helps maintain database performance by removing unused data.
Specifically it does two things:
  • First, it removes unreferenced states and their associated delta table rows.
  • Second, it moves entries in the delta tables that are common to all versions into the base tables, thus reducing the amount of data that the database searches through when executing queries. In effect, a compress will improve query performance and system response time by reducing the depth and complexity of the state tree.
When a large volume of uncompressed changes has accumulated in an ArcSDE geodatabase, a compress operation can take hours or even days. This is another very common cause of poor performance. To avoid this, you should compress on a regular basis (daily, weekly, and after periods of high editing activity). Users can stay connected to the geodatabase during a compress, but we suggest that all users be disconnected for the compress operation to be fully effective.
Remember to update statistics before and after a compress. The compress command is available in ArcCatalog; you add it from the Customize dialog box, and you must be connected as the SDE user to execute it. Alternatively, you could execute a compress with SDE commands.
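
For the command-line route, the compress operation is exposed through sdeversion (syntax sketched from memory; confirm it against the ArcSDE Command Reference for your release):

sdeversion -o compress -u sde -p <sde_password> -i <service>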

The geodatabase compress operation

HowTo:  Compress a versioned database to state 0

 Five Best Practices for Maintaining an ArcSDE Geodatabase

 

Direct Connections (2-Tier) V/S ArcSDE Service Connections (3-Tier):

Old habits are hard to break, and the 3-tier application (ArcSDE) service is certainly one of them. When I know I have to use SDE commands for troubleshooting, I almost always set up a service so I don’t have to type the connection string repeatedly.
With that being said, for everyday use the direct connection really is the way to go.  Consider this a choice between geodatabase transactions processed over the network or those same geodatabase transactions processed on the client machine.  Most computers today have more than enough processing power to handle this, and other than the text in the connection properties, ArcSDE functionality remains unchanged.
Check out this great link for more details:


Why should I be making direct connections to an ArcSDE geodatabase?

Here are a couple of direct connect syntax examples to get you started:

SQL Server – sde:sqlserver:<server_name\instance_name>
Oracle with ArcSDE 10 – sde:oracle11g:<net service name>   (or sde:oracle10g)
Oracle pre v.10 – sde:oracle10g:\;LOCAL=<Oracle SID>
PostgreSQL – sde:postgresql:<server_name>
 

Thanks.

NJ

$SDEHOME/etc – Understanding SDEHOME and its components

Whether it’s $SDEHOME or %SDEHOME%, the “etc” folder contains a wealth of information that, when you encounter an error, may point out something silly and easy to fix or at least lead you in the right direction.

dbinit.sde

The dbinit.sde file is read each time the ArcSDE instance starts.  This file can be used to set environment variables for error logging, location paths, user names, passwords and more.  Here are two environment variables to enable a client intercept log.

set SDEINTERCEPT=TRUE

set SDEINTERCEPTLOC=C:\Temp\client_intercept

Environment variables

dbtune.sde

This file contains the configuration keywords and their specified values.  Typically, the default parameters are acceptable but it’s possible to create new keywords or change the default values. This is a topic unto itself so I’ll leave it be for now.   Have a look at these links for more details:

What is the DBTUNE table?

What are DBTUNE configuration keywords and parameters?

giomgr.defs

This file updates the sde.server_config table in the database.  Most of the initialization parameters in this table should not need to be altered from their default settings, except possibly the TEMP location on Windows installations and MINBUFFSIZE and MAXBUFFSIZE, which can be adjusted to improve data loading performance.

The TCPKEEPALIVE parameter is the value I seem to change the most.  Setting this to TRUE can help avoid orphaned gsrvr processes, which can hog network resources and prevent additional connections.  Here’s an example that changes the TCPKEEPALIVE parameter to TRUE with the sdeconfig command:

C:\> sdeconfig -o alter -v TCPKEEPALIVE=TRUE -i <service> -D <database_name>


ArcSDE Command Reference

services.sde

This file stores the name and TCP/IP port number for the ArcSDE service.  Unix machines will always pull information from this file to connect.  Windows machines will only use this file when starting a service with the sdemon command.  The Windows services file can be found in the %windir%\System32\drivers\etc directory.
#
# ESRI ArcSDE Remote Protocol
#
#esri_sde 5151/tcp

sde_<service_name>.log,  giomgr_<service_name>.log

The giomgr listens for requests to connect to the database.  When the request is received, the giomgr spawns a gsrvr process for that client.  When a service fails to start or if the giomgr fails to cough up a gsrvr, a brief description of the problem and an error code will appear in this file.  Have a look at this link describing ArcSDE error return codes.

Return codes

sdedc_<database_name>.log

This log contains connection information and info on specific commands, and it reports errors raised during connection initialization.  Much like the service log, this will report back what’s happening during a direct connection.

    Tip:  If you have an etc directory in your ArcGIS installation location, the file is written there. If you have neither an SDEHOME variable nor an etc directory, the log files are written to the temp directory.


Thanks.
NJ
ArcSDE User Permissions ESRI 10.2
 

The SDE user is responsible for several geodatabase maintenance tasks, including compression and version management among others.  The SDE user also owns the geodatabase system tables, triggers and procedures.   The DBMS privileges necessary to manage and alter the geodatabase are granted to the SDE user upon creation of the geodatabase during the post-installation process.  Therefore it is absolutely unnecessary to grant every permission and add the SDE user to every admin role.  Sometimes that can even have an adverse effect on performance and usability.  The SDE user will only need membership in the Public role.

It is recommended that the ArcSDE administrator and its schema only be used to manage and store ArcSDE system tables. You should create separate user schemas in which to store your ArcSDE data objects, such as feature classes and raster datasets. You should not store these objects in the ArcSDE administrator’s storage space, since you could possibly crash the ArcSDE service by filling up the ArcSDE administrator’s space. Following the practice of storing only system tables in the ArcSDE administrator’s storage space simplifies the management of ArcSDE.

Another good idea is to create a user with the necessary privileges to create and upgrade geodatabases.  Like the SDE user, this account should not be used to create data.

    User privileges for geodatabases in Oracle
    User privileges for geodatabases in SQL Server
    User privileges for geodatabases in PostgreSQL

While SYSADMIN, SERVERADMIN or GRANT DBA TO SDE sound really cool and powerful, they are 100% unnecessary.  I ask for just a little faith in the trusty old post-install wizard.  Faith that it will grow your SDE user and make it the best little geodatabase administrator it can be.


Thanks.
NJ
ArcSDE Connection Errors 

The link here provides detailed information about all the errors in ArcSDE; the important ones are given below:

"SDE not running on server"

If a service is your preferred connection method, you may be familiar with this error.

This simply means that the ArcSDE service has not been started (or has stopped).  Check the Connection Properties dialog and make sure the correct service name or port number is specified.  If you’re confident this is right, you can check the status of the service one of two ways.  For Windows users, open the Services panel by right-clicking on My Computer and clicking Manage (or click Run > services.msc).  If the service is not started, start it and check again.

From the command line, you can check the service with sdeservice -o list (Windows) or sdemon -o status (UNIX/Linux); the -i parameter can be used to specify a particular service.  If the status returns NOT_STARTED, use the sdemon -o start command to fire it up.
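
For example (the service name esri_sde is a placeholder):

sdemon -o status -i esri_sde
sdemon -o start -i esri_sde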

SDE Command Reference

http://help.arcgis.com/en/geodatabase/10.0/admin_cmds/Support_files/whnjs.htm



“Server library could not be loaded”

The Desktop 10 help states that “This message is usually returned when SDEHOME is set incorrectly in your system path variable or as its own environment variable”.  This reason is certainly valid and common, but it’s usually the direct connect syntax.  Ensure the server name is spelled right and the direct connect string follows the specified standards for your DBMS.

About the path….

A good tip is to check the path variable (in a command prompt, type path).   Ensure the SDEHOME path is correct and is also the first entry in the series of paths.   Also, look at the actual %SDEHOME% variable.  Look for extra slashes or backward slashes, forward slashes that should be backslashes or vice versa.  There is also the ever elusive semicolon.



“Bad user login”

This isn’t always as simple as it seems.  However, it usually means the user name or password was entered incorrectly.  If you’re sure everything is typed right:

    Check what ArcSDE instance was specified for the connection. If the wrong ArcSDE service, direct connection syntax, or database name was supplied, authentication is occurring against another database or server, which may not have this user in it.
    If you are using operating system authentication, be sure the login is recognized by the DBMS.
    If you are connecting to a geodatabase in SQL Server, check to see if the complex password policy is set in the SQL Server database. If so, be sure the user’s password meets the requirements of the password policy. If it does not, the user cannot connect to the database.



If a service or direct connection on either OS fails to start manually, it may be a good time to call Esri support.  Be sure to note any error messages and navigate to the SDEHOME\etc folder and copy out the log files.  If you’re feeling adventurous, look at the log files, note down the errors and look up the return codes.  It sometimes takes a little deciphering but the info needed to solve the problem is out there.

Thanks,
NJ
How to Synchronize users in bulk to a newly created geodatabase in SQL Server

There is often the need to create or restore a geodatabase in SQL Server, whether it is for testing or a new production database.  Using the sp_change_users_login procedure works well to synchronize one or two users.  When there are a lot more, this method may not be the best way to go.
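
For reference, the one-at-a-time approach looks like this (the database and user names are placeholders):

-- Remap an orphaned database user to the server login of the same name.
USE new_database;
EXEC sp_change_users_login 'Auto_Fix', 'gis_editor';

-- Report any database users that are still orphaned.
EXEC sp_change_users_login 'Report';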

Follow these simple steps to synchronize your users in bulk:

In SQL Server Management Studio:

    Right click on the original database > Tasks > Generate Scripts.
   
    You only want to sync users so only check ‘Users’ in the Choose Specific Objects section.
   
    The Set Scripting Options section allows you to specify where you want to save the SQL output.  In most cases I send it to the clipboard.
   
    Click next a few times and then finish.
   
    Open a new query window and paste (or load) in your newly generated SQL script.  You will need to change the USE [original_db_name] at the top to match the new database you want to sync the users to.
   
    Execute the script.
   
    Now make some test connections as the newly synchronized users.
   
Thanks.
NJ
Problem:  Poor compress performance

http://support.esri.com/en/knowledgebase/techarticles/detail/20428


Thanks.
 
A guide to geodatabase replication (Distributed Geodatabase)

Creating a replica is a great way to share part or all of your geodatabase with other users.  A replica can be more than just a copy of your geodatabase; it is essentially a version of your geodatabase with the ability to synchronize changes.  This needs to be well planned and managed, as it can get complicated, especially in a disconnected environment.

Here are a few things to think about before you create a replica:

    Plan carefully which data needs to be replicated: does it participate in advanced geodatabase functionality such as geometric networks and topology?  Should supporting data be replicated as well?  Try to make your replica future-proof if possible so you don’t have to recreate it too often.

    Do you need to synchronize in both directions or will one way be enough?

    If you plan on having a large amount of data in your replica, consider exporting and loading the data into your child geodatabase before you create the replica.  Then create a replica with the option to register existing data only.  This means you will only have to synchronize small amounts of data.  I recommend this because there is always the possibility of corruption when sending large amounts of data over a network, especially if it is over the internet.

    A two-way replica is only available when both geodatabases are ArcSDE geodatabases. The data must also be registered as versioned without the option to move edits to base.

    All replicated data must have a Global ID column, which can be created using the ‘Add Global IDs’ tool.  If you plan to use the register existing data only option, you should give the data Global IDs before copying it to the other geodatabase.

There are many ways to distribute your geodatabase, from a check-out replica with a local file geodatabase up to a fully complex two-way disconnected replica. Depending on your requirements, you can even use ArcGIS Server and then create a replica from your ArcGIS Server service.  This is a subject for the future.

Here is a good starting point from the Esri Resource Center to help you learn more about creating replicas:

Working with geodatabase replicas

http://resources.arcgis.com/en/help/main/10.2/index.html#/Working_with_geodatabase_replication/003n000000v1000000/


Thanks
NJ
Useful Resources for Rebuilding Indexes and Updating Statistics in an SDE Geodatabase for SQL Server and Oracle:

Two tasks found in all Esri-supported RDBMSs are updating DBMS statistics and rebuilding indexes. If you are loading data in bulk or running query-intensive DML operations with SQL, these tasks will need to be done often. Fortunately for us, they are relatively straightforward to accomplish.

The scripts found here provide the SQL to both update statistics and rebuild indexes. Oracle databases should be set to update statistics automatically. The SQL Server script is still applicable for geodatabases in SQL Server 2008, 2008 R2 and 2012, even though it says SQL Server 2000/2005.
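
On the Oracle side, the heart of such a script is DBMS_STATS plus ALTER INDEX ... REBUILD; a minimal sketch (the schema and index names are placeholders):

-- Gather fresh optimizer statistics for one schema, cascading to its indexes.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'GISDATA', cascade => TRUE);
END;
/

-- Rebuild a single index.
ALTER INDEX gisdata.roads_sidx REBUILD;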

Also, if you are using ArcGIS 10.1 there are some new geoprocessing tools that can do this.
ArcGIS 10.1 – ArcToolBox:

Data Management Tools>Geodatabase Administration>Analyze Datasets

Updates database statistics of base tables, delta tables, and archive tables, along with the statistics on those tables’ indexes. This tool is used in enterprise geodatabases to help get optimal performance from the RDBMS’s query optimizer. Stale statistics can lead to poor geodatabase performance.

Data Management Tools>Geodatabase Administration>Rebuild Indexes:

Updates indexes of datasets and system tables stored in an enterprise geodatabase. This tool is used in enterprise geodatabases to rebuild existing attribute or spatial indexes. Out-of-date indexes can lead to poor geodatabase performance.

No matter which one you choose, these tasks can be automated: the SQL can be run as scheduled jobs, and the ArcToolbox tools can be exported to Python and automated as well. Either way, do it.



Thanks
NJ
How to Setup SDE Intercept on Desktop (SDEINTERCEPT video) ArcSDE ArcGIS ESRI 10.2

https://www.youtube.com/watch?v=y1xK8tzkazQ&feature=youtu.be


Thanks,
NJ

How to create a Batch File for SDE Intercept and read the SDE intercepts.

1.  Open Notepad and paste this:

@echo off
rem Set the intercept location and flags for this cmd session only.
set SDEINTERCEPTLOC=C:\TEMP\client_Intercept
set SDEINTERCEPT=cwrTf
rem Launch ArcMap from its install directory.
start /d "C:\Program Files\ArcGIS\Desktop10.2\bin\" ArcMap.exe

Please make sure the path to your temp folder (which will contain the trace) and the path to ArcMap are correct.

2.  Save the file with the .bat extension.

intercept.bat

3.  Run the batch file.

In this example, the SDEINTERCEPT and SDEINTERCEPTLOC environment variables are set for the current session and ArcMap is launched.  Make a connection to an SDE geodatabase and the client_intercept files should appear in the SDEINTERCEPTLOC folder you specified.

Thanks,
NJ


Oracle: bitmap conversion to rowid 

One of my colleagues recently had a question about bitmap conversion in Oracle. Here is something for you, my friend.


Question:  I have a query against tables without a bitmap index, and yet I see "bitmap conversion to ROWID" in the execution plan.  How does bitmap conversion to ROWIDS work?  I want to understand the optimizer behavior, i.e. BITMAP CONVERSION TO ROWIDS and BITMAP CONVERSION FROM ROWIDS. What does this mean and how can I change it to simple rowid access?


Answer: The "bitmap conversion to ROWIDS" execution plan step was generally introduced in Oracle 9i when the default for _b_tree_bitmap_plans changed from "false" to "true".  The "BITMAP CONVERSION (TO ROWIDS)" plan is not always an optimal step when converting from b-tree indexes, and it can be very inefficient access plan in some cases.


Bitmap conversion to ROWIDS does not require bitmap indexes, and it's sometimes found in cases of SQL with complex WHERE clause conditions.
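
A typical reproduction needs nothing more than two b-tree indexes and an OR predicate; a sketch (the orders table, its columns and the index names are placeholders):

-- Two single-column b-tree indexes plus an OR in the WHERE clause often
-- produce BITMAP CONVERSION FROM/TO ROWIDS steps in the plan.
create index orders_status_idx on orders (status);
create index orders_cust_idx on orders (customer_id);

set autotrace traceonly explain
select * from orders where status = 'OPEN' or customer_id = 42;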



The bitmap conversion to rowids is sometimes seen with star transformations (by setting star_transformation_enabled = true).  You can also turn off bitmap conversion to ROWIDS in your init.ora by resetting this hidden parameter:



_b_tree_bitmap_plans=false



You can also turn off bitmap conversion at the session level, for testing:



alter session set "_b_tree_bitmap_plans" = false;
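
If you only want to suppress the conversion for a single statement, the OPT_PARAM hint (10g and later) is another option; a sketch against the same placeholder table as above:

select /*+ OPT_PARAM('_b_tree_bitmap_plans' 'false') */ *
from orders
where status = 'OPEN' or customer_id = 42;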


As always, notify Oracle technical support before employing any hidden parameters, as they can make your database unsupported.

Notes:

There are some Oracle 9i bugs (bug 2546446 and bug 2742886) that invoke an inappropriate "BITMAP conversion to ROWID" access path.


Thanks,
NJ

Friday, 7 August 2015

SGA V/S PGA

---------------------------------------------------------------------------------


SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle Instance starts up. The SGA's size and function are controlled by initialization (INIT.ORA or SPFILE) parameters.


In general, the SGA consists of the following sub-components, as can be verified by querying the V$SGAINFO view:

SELECT * FROM v$sgainfo;

The common components are:

* Data buffer cache - caches data and index blocks for faster access.
* Shared pool - caches parsed SQL and PL/SQL statements.
* Dictionary cache - holds information about data dictionary objects.
* Redo log buffer - holds committed transactions that are not yet written to the redo log files.
* Java pool - caches parsed Java programs.
* Streams pool - caches Oracle Streams objects.
* Large pool - used for backups, UGAs, etc.


SQL> SHOW SGA

Total System Global Area  638670568 bytes
Fixed Size                   456424 bytes
Variable Size             503316480 bytes
Database Buffers          134217728 bytes
Redo Buffers                 679936 bytes

SQL> SELECT * FROM v$sga;

NAME                      VALUE
-------------------- ----------
Fixed Size               456424
Variable Size         503316480
Database Buffers      134217728
Redo Buffers             679936

The buffer cache component of the SGA is sized by the DB_CACHE_SIZE parameter; the SGA as a whole is controlled by SGA_TARGET and SGA_MAX_SIZE (or MEMORY_TARGET in 11g and later).
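
For example, to size an automatically managed SGA (the values are illustrative):

ALTER SYSTEM SET sga_max_size = 5G SCOPE = SPFILE;  -- static parameter, needs a restart
ALTER SYSTEM SET sga_target = 5G SCOPE = SPFILE;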


PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area, hash area, session cursor cache, etc.
Auto tuning

PGA areas can be sized manually by setting parameters like HASH_AREA_SIZE, SORT_AREA_SIZE, etc.

To allow Oracle to auto-tune the PGA areas, set the WORKAREA_SIZE_POLICY parameter to AUTO and PGA_AGGREGATE_TARGET to the amount of memory that can be used for the PGA. This feature was introduced in Oracle 9i.
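
For example (the target value is illustrative):

ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 2G;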

PGA usage statistics:

select * from v$pgastat;

Determine a good setting for pga_aggregate_target:

select * from v$pga_target_advice order by pga_target_for_estimate;

Show the maximum PGA usage per process:

select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem) from v$process;

A good explanation of the SGA is given at the following link:

https://community.oracle.com/thread/904540




Happy blogging :)

Monday, 3 August 2015

How to create a geodatabase with the post-installation wizard in ArcSDE 9.3 (ArcGIS, ArcSDE, ESRI), and how to use the wizard to create the SDE user and set up the repository?
 
https://youtu.be/jNNm72FidNo

How to check the version and build number of ArcMap/ArcCatalog?

https://youtu.be/DLAH52scPXs


 
How to check the version of ArcSDE software installed on your machine?

https://youtu.be/t890ZS0L1BA

 
How to upgrade an ArcSDE Oracle geodatabase?

https://youtu.be/EWtdbA4BsD0
How to Compress an Oracle ArcSDE geodatabase?

https://youtu.be/rj1G0OOD5nw


How to create an ArcSDE Oracle geodatabase ?

Part 1

https://www.youtube.com/watch?v=7O9mAHQoR6A

Part 2

https://www.youtube.com/watch?v=xEM3IExl-ms

 
How to perform non-versioned editing in ArcMap ?

https://youtu.be/u0G2r4ohVUY


How to perform versioned editing in ArcMap ?


https://youtu.be/TEaKSSS9phs

Thank You.