
Frequently Asked Interview Questions

Error "An Authentication object was not found in the SecurityContext" in load runner

Sometimes, when an application uses web services, VuGen does not record the authentication object during recording, and on replay the script fails with the error "An Authentication object was not found in the SecurityContext". The reason for the SOAP fault is:

"An Authentication object was not found in the SecurityContext" As mentioned by above error, the application is mostly launched via a web-link where the run time jars / dlls are downloaded locally and application is launched. During launching itself the application takes the credentials from local system (your AD account details) and authenticates the user. In case of replaying this operation in VUGen this authentication information is not available, this can be done by sending the authentication information in the header before the first web-service request as shown below: Otherwise if your web service method has property for sending authentication object then you can do so in the request itself.

Posted by raviteja gorentla


Labels: Controller, Errors in LoadRunner, ExtraStuff, General, Monitoring, Performance Testing, PerformanceCenter

How can we handle CAPTCHA in LoadRunner?

CAPTCHA (an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge-response test used in computing to determine whether or not the user is human. The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. The purpose of a CAPTCHA is to defeat automation. CAPTCHAs are based on the Turing test, and their main purpose is to differentiate a human from a machine. Pattern-recognition, vocal, and visual (dynamic/static) variants have been broken several times. Much of the time they become a pain for end users, because they are hard to decipher. Cultural references or social patterns can prove useful, achieve the same purpose, and be less annoying for your customers.

If you want to script an application that has a CAPTCHA with LoadRunner, you have to ask the development team for one of the following:

  • To disable CAPTCHA validation

  • To make the CAPTCHA static

  • To remove the CAPTCHA

  • To send the CAPTCHA value in the server response, so that you can correlate it (see the sketch after this list)

  • To provide the CAPTCHA values stored in the database so they can be fed to the script sequentially.
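If the development team agrees to expose the CAPTCHA value in the server response, capturing and replaying it is a standard correlation. A minimal sketch, assuming the value is echoed in the page (the boundaries, parameter name, and form field below are hypothetical and depend on how the value is actually exposed):

// Capture the CAPTCHA value exposed in the server response (hypothetical boundaries)
web_reg_save_param("captchaValue", "LB=captcha_value=\"", "RB=\"", "NotFound=warning", LAST);

// ... request that returns the page containing the CAPTCHA ...

// Submit the captured value back with the form (hypothetical field name)
// "Name=captcha", "Value={captchaValue}", ENDITEM,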

Posted by raviteja gorentla 4 comments:


Labels: Analysis, Errors in LoadRunner, ExtraStuff, General, LoadRunner, Performance Testing, PerformanceCenter, Tips n Tricks, Vugen

SSL received a weak ephemeral Diffie-Hellman key. (Error code: ssl_error_weak_server_ephemeral_dh_key) resolved

While recording an application with LoadRunner, or sometimes when browsing manually, you may get the below error in Mozilla Firefox: "Secure Connection Failed. An error occurred during a connection to consoleeset.soges-tech.ca:8443. SSL received a weak ephemeral Diffie-Hellman key in Server Key Exchange handshake message. (Error code: ssl_error_weak_server_ephemeral_dh_key) The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem."

Solution for the above problem:

  • Type about:config in the address bar of Mozilla Firefox

  • Search for security.ssl3.dhe_rsa_aes_128_sha and security.ssl3.dhe_rsa_aes_256_sha

  • Set them both to false by double-clicking each entry.

Posted by raviteja gorentla 7 comments:


Labels: Analysis, General, Jprofiler, Manual Testing, Monitoring, Performance Testing, Tips n Tricks

WHAT IS A REVERSE PROXY SERVER?

A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate back-end server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers. Reverse proxy server benefits:

1. Load balancing: A reverse proxy server can act as a "traffic cop," sitting in front of your back-end servers and distributing client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no one server is overloaded, which can degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.

2. Web acceleration: Reverse proxies can compress inbound and outbound data, as well as cache commonly requested content, both of which speed up the flow of traffic between clients and servers. They can also perform additional tasks such as SSL encryption to take load off your web servers, thereby boosting their performance.

3. Security and anonymity: By intercepting requests headed for your back-end servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks. It also ensures that multiple servers can be accessed from a single record locator (URL), regardless of the structure of your local area network.

Posted by raviteja gorentla 1 comment:


Labels: Analysis, Controller, General, Performance Testing, PerformanceCenter, Tips n Tricks, Vugen

nslookup for multiple servers with a single click.

Nslookup is a command for testing and troubleshooting DNS servers. Nslookup can be run in two modes: interactive and non-interactive. Non-interactive mode is useful when only a single piece of data needs to be returned. Syntax: nslookup [-option] [hostname] [server]. Sometimes you need the DNS details for a large number of servers, and in that case you would have to run the command multiple times and capture the output each time. To simplify this, I found an interesting tool named DNSDataView. You can run nslookup-style queries for any number of servers with a single click in a clean GUI. Download the tool here: http://www.nirsoft.net/utils/dnsdataview.zip Reference: http://www.nirsoft.net/utils/dns_records_viewer.html
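For comparison, a minimal standalone sketch of the same idea in C, using POSIX getaddrinfo to resolve a whole list of hosts in one run (the hostnames are placeholders; on Windows you would first initialize Winsock with WSAStartup):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void)
{
    /* Placeholder hostnames to resolve in one pass */
    const char *hosts[] = { "server1.example.com", "server2.example.com", "server3.example.com" };
    char ip[INET6_ADDRSTRLEN];
    size_t i;

    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++) {
        struct addrinfo hints, *res, *p;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(hosts[i], NULL, &hints, &res) != 0) {
            printf("%s: lookup failed\n", hosts[i]);
            continue;
        }
        /* Print every address returned for this host */
        for (p = res; p != NULL; p = p->ai_next) {
            void *addr = (p->ai_family == AF_INET)
                ? (void *)&((struct sockaddr_in  *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, addr, ip, sizeof(ip));
            printf("%s -> %s\n", hosts[i], ip);
        }
        freeaddrinfo(res);
    }
    return 0;
}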

Posted by raviteja gorentla No comments:


Labels: Citrix, ExtraStuff, General, JMeter, LoadRunner, Performance Testing, Tips n Tricks

Perceiver Monitoring tool

Perceiver is a monitoring tool from BMC. It was introduced because companies invest in enterprise applications and infrastructure to deliver optimal service to their end-user community, while IT organizations are asked to manage more systems with fewer resources and to reduce costs. Performance analysts and capacity planners are often asked to create volumes of custom charts and graphs for different audiences, instead of focusing on high-value capacity planning and performance engineering responsibilities that provide a greater return on investment for the company. The BMC Perceiver tool is a good solution for this.

KEY BENEFITS:

  • User interface allows non-experts to easily access actionable data

  • Ad hoc queries to track, view, and relate performance metrics to business applications

  • Common interface for enterprise-wide systems and applications

  • Out-of-the-box value with BMC best-practices views

FEATURES:

1. Enhances decision-making capabilities by providing direct access to relevant performance data through a dynamic web interface

2. Provides ad-hoc queries to track, view and relate detailed performance metrics to business applications

3. Increases the visibility and success of the performance organization by providing a consumer viewing tool for internal customers

4. Simplifies training and use via an easy-to-use web interface, eliminating the need for expert users and additional in-depth training

5. Maximizes the investment in BMC Performance Assurance by greatly increasing the number of direct users

6. Delivers out-of-the-box value with pre-loaded BMC Software Best Practice views, including an online drag-and-drop editor for customization

7. Protects your performance investment by providing a performance viewing tool available across multiple platforms

ABOUT BMC SOFTWARE: BMC Software delivers the solutions IT needs to increase business value through better management of technology and IT processes. Our industry-leading Business Service Management solutions help you reduce cost, lower risk of business disruption, and benefit from an IT infrastructure built to support business growth and flexibility. Only BMC provides best-practice IT processes, automated technology management, and award-winning BMC Atrium technologies that offer a shared view into how IT services support business priorities. Known for enterprise solutions that span mainframe, distributed systems, and end-user devices, BMC also delivers solutions that address the unique challenges of the midsized business. Founded in 1980, BMC has offices worldwide and fiscal 2008 revenues of $1.73 billion. Activate your business with the power of IT. www.bmc.com Source: http://discovery.bmc.com/

Posted by raviteja gorentla 2 comments:


Labels: Analysis, Controller, Errors in LoadRunner, ExtraStuff, General, LoadRunner, Manual Testing, Monitoring, Performance Testing, PerformanceCenter, Scripting, Tips n Tricks, Vugen

CPU of Load Generator Exceeded 80%

Recently I ran a load test and saw the CPU of the load generator exceed 80%. Here are some ways to find the root cause. There could be several things going on, but running 2008 machines on VMs is common; if you are using Web Vusers, the footprint should be small.

  • Check whether you have admin rights on the system.

  • Is CPU consumption above 80% for the entire duration of the test?

  • Check for any mismatch in patch versions between components.

  • Try to run the agent using "Run as administrator".

  • Check for the mdrv process in the task manager of the Controller and load generator machines while the test is running.

  • Make sure the same patch level is installed on every component; a version mix is a real problem.

  • Check whether McAfee Antivirus or Symantec NetBackup is installed on your LG machine.

Posted by raviteja gorentla No comments:


Labels: Analysis, Controller, LoadRunner, Performance Testing, PerformanceCenter, Scripting, Tips n Tricks, Vugen

"Error -27778: SSL protocol error when attempting to connect with host" in load runner

Recently I had some issues with an SSL protocol error: while running scripts in the Controller I was getting "Error -27778: SSL protocol error when attempting to connect with host". I finally got rid of it by placing the following code at the start of the script: web_set_sockets_option("SSL_VERSION", "TLS"); web_set_sockets_option("SSL_VERSION", "1"); This forces the SSL connection to the server to use version 1 of the SSL protocol rather than letting the server suggest a version during the connection handshake. We had the same issue on VuGen 11.50 with the newest patch as well; in that case we also enabled the following setting: Run-Time Settings -> Preferences -> select "WinInet Replay Engine instead of Sockets (Windows Only)".
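A minimal placement sketch, assuming the socket-based replay engine is in use (the two calls mirror the lines quoted above; adjust the value to the SSL/TLS version your server actually supports):

vuser_init()
{
    // Force the SSL version before any connection is opened (values taken from the post above)
    web_set_sockets_option("SSL_VERSION", "TLS");
    web_set_sockets_option("SSL_VERSION", "1");

    // ... the first web_url / web_submit_data request of the script follows here

    return 0;
}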

Posted by raviteja gorentla 2 comments:


Labels: Controller, General, Performance Testing, Scripting, Vugen

"HttpSendRequest" failed, Windows error code=12002 - Perf Center Error_load runner error

While running an internal application with the LoadRunner Controller, I was getting the error ""HttpSendRequest" failed, Windows error code=12002". I solved it by following the steps below:

1. Check whether Vugen runs the script as a process or runs Vusers as a thread.

You can check this in Run-time Settings > General > Miscellaneous > Multithreading, and uncheck the value.

2. Are you working with WinInet? The error itself comes from the WinInet API, not specifically from Vugen. We can see that part of the replay errors come from resources that timed out during the replay.

If you are forced to use WinInet then this will occur, but if you can use Sockets you may want to try that option instead, or a Click and Script protocol.

You can uncheck "WinInet replay instead of sockets (Windows only)" in Run-time Settings > Preferences. This should solve the problem.

3. Another option is to use web_set_max_retries("X") to increase the retry limit. You can place this before the action that is failing, but I wouldn't recommend it. The HttpSendRequest timeout only occurs when a transaction takes more than 30 seconds to connect to the server. This default 30-second timeout comes from the WinInet replay engine, so the error pops up when you are running the script with WinInet replay and transactions take more than 30 seconds. The real fix is to tune the whole system and check the back-end servers to see whether a request queue has formed.
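A minimal sketch of the retry option from point 3 (the retry count and the step it precedes are placeholders, not values from the original post):

// Allow more retries for the step that intermittently times out (placeholder count)
web_set_max_retries("10");

web_url("slow_page",                          // hypothetical failing step
    "URL=http://myserver/slow_page.aspx",
    "Resource=0",
    LAST);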

Posted by raviteja gorentla 3 comments:


Labels: Errors in LoadRunner, ExtraStuff, General, LoadRunner, Performance Testing, Scripting, Tips n Tricks, Vugen

LoadRunner – Script Anatomy Description

When you record and save a LoadRunner script in Vugen, a number of files are created. Here is what they are, what they do, and which files you can safely delete.

Files Required for Playback

During the course of recording and playback of scripts, Vugen creates many files, but only some of them are necessary for playback (either in Vugen or the Controller). For example, if you have a script named PerformanceEngineer with two Actions, Home and Forums, the required files in the PerformanceEngineer script directory would be:

  • PerformanceEngineer.usr

  • default.usp

  • default.cfg

  • globals.h

  • Home.c

  • Forums.c

  • vuser_init.c

  • vuser_end.c

  • PerformanceEngineer.prm

Here is what is in each file:

  • PerformanceEngineer.usr: Primarily, the .usr file defines which actions are used by the script. There are other properties which define which protocols are used and other settings, but this holds most of the info.

  • default.usp: Contains the run logic for the script.

  • default.cfg: Contains the run-time settings (except for run logic).

  • globals.h: The global headers file; visible and editable in Vugen.

  • *.c (Action files): These are the action files containing your script code. You can edit these files in any text editor if you want; sometimes that is easier than starting up Vugen.

  • PerformanceEngineer.prm: Contains the parameter definitions.

  • *.dat: Your data files. You can save these in the script directory or somewhere else, even a mapped network drive on a different server.

Files Created During Vugen Playback

All of the files listed below can safely be deleted without affecting your ability to use the script:

  • result1: One or more result directories are created which contain script playback results.

  • *.idx: The .idx files are binary "index" files created by Vugen for holding parameter values.

  • PerformanceEngineer.ci, combined_PerformanceEngineer.c: A list of #includes for all of your Actions.

  • logfile.log, mdrv.log: Random log files which you will probably never need to look at.

  • mdrv_cmd.txt, options.txt: These text files contain commands and arguments for the script compiler and driver (mdrv) and are created dynamically, so you can safely delete them.

  • output.txt: This one is important. It contains all of the log messages generated during script playback; its contents appear in the "Output Window" section of Vugen.

  • output.bak: A backup of the above file.

  • pre_cci.c: Output from the C pre-processor, which contains all of the functions used in your script, from all of the Actions and header files.

In summary, you can delete: *.txt, *.log, *.idx, *.bak, result*, pre_cci.c, combined_*, *.ci

Files Created During Recording

The 'data' directory in your script directory contains the script recording data. I usually delete this so it doesn't get checked into my version control system, but you may want to keep it around if you use the graphical scripting mode and/or you want to compare playback vs. recording. The auto-correlation feature makes use of this data too, but I haven't had much success using that feature. (This has been referred from the site performanceengineer.com)

Posted by raviteja gorentla


Labels: Controller, ExtraStuff, General, LoadRunner, Performance Testing, Tips n Tricks, Vugen

HP Performance Center 12 and HP LoadRunner 12 protocol bundles

Bundle name and protocols:

  • .NET record/replay: Microsoft ADO.NET; Microsoft .NET 2.0, 3.0, 3.5, and 4.0; Windows Communication Foundation (WCF)

  • Database: ODBC; Oracle (2-Tier)

  • DCOM: Microsoft COM/DCOM

  • Developer: Unit Test (nUnit, jUnit, and Selenium); SDK

  • GUI virtual users: HP Functional Testing (HP QuickTest Professional)

  • Java record/replay: Jacada; Java over HTTP Vuser; JMS; CORBA—Java; RMI—Java (includes ORMI)

  • Network: Domain Name Resolution (DNS); File Transfer Protocol (FTP); Internet Message Access Protocol (IMAP); Lightweight Directory Access Protocol (LDAP); Microsoft Exchange (MAPI); Post Office Protocol (POP3); Simple Mail Transfer Protocol (SMTP); Tuxedo; Windows Sockets

  • Oracle E-Business: Oracle NCA; Oracle Web Applications 11i (Click and Script); PeopleSoft Enterprise (Click and Script); PeopleSoft—Tuxedo; Siebel—Web; Web (HTTP/HTML)

  • Remote access: Citrix Virtual User (ICA); Remote Terminal Emulation (RTE)

  • Remote desktop: Microsoft Remote Desktop Protocol (RDP)

  • Rich Internet applications: Action Message Format (includes RTMP/AMF); AJAX Click and Script; AJAX TruClient—Firefox; AJAX TruClient—IE; Flex Virtual User (for Adobe Flash); Silverlight Vuser; Mobile TruClient

  • SAP: SAP Click and Script; SAP GUI; SAP—Web; SAP Mobile Platform (SMP)

  • SOA: MQSeries—Client; MQSeries—Server; Service Test Vuser; Web Services

  • Templates: C Vuser; C#.NET Vuser (Visual Studio add-in); C++.NET Vuser (Visual Studio add-in); Enterprise Java Beans (EJB); Java Vuser; JavaScript Vuser; VBScript Vuser; VB.NET Vuser (Visual Studio add-in); VBNet Vuser

  • Web 2.0: Web and multimedia, RIA and SOA (combined)

  • Web and multimedia: Media Player (MMS); Real (RealPlayer); Web (Click and Script); Web (HTTP/HTML)

  • Mobile: Mobile Applications Protocol

  • Wireless: Multimedia Messaging Service (MMS); WAP

Note: some protocols are available for HP LoadRunner only.

Posted by raviteja gorentla 1 comment:


Labels: ExtraStuff, General, LoadRunner, Performance Testing, Tips n Tricks, Vugen

"The requested operation cannot be completed because the Terminal connection is currently busy processing a connect operation" Error solved

This issue arises when a user has disconnected from a remote server instead of logging off, taking up one of the Remote Desktop sessions. We then get the error "The terminal server has exceeded the maximum number of allowed connections". This can be corrected by logging into the server in console mode and manually logging off the user. Whenever we try to connect we see the same error, because the user was disconnected from the remote machine instead of logging off, so the user needs to log in to the system and log off the session opened previously. You can use the commands below to find and kill the offending process on the remote machine.

c:\>sc \\THESERVERNAME query TermService

SERVICE NAME: TermService

DISPLAY_NAME: Terminal Services

TYPE               : 20  WIN32_SHARE_PROCESS
STATE              : 4   RUNNING (NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE    : 0   (0x0)
SERVICE_EXIT_CODE  : 0   (0x0)
CHECKPOINT         : 0x0
WAIT_HINT          : 0x0

The Terminal Services service was running, and it can't be restarted on Server 2003, so we take a look at the running processes: C:\>tasklist /s MYSERVERNAME /u MYUSERNAME /p MYPASSWORD (output truncated to highlight the relevant processes)

Image Name            PID   Session Name   Session#   Mem Usage
====================  =====  =============  =========  ==========
System Idle Process       0                         0       28 K
csrss.exe              4140  Console                7    2,684 K
winlogon.exe           4220  Console                7    5,840 K
logon.scr              4500  Console                7    1,580 K

Looking at the processes above, I recalled an issue that can sometimes arise with the logon.scr process on virtual machines. Thinking that logon.scr (process ID 4500) might be the culprit, I decided to try killing the process: C:\>taskkill /s MYSERVERNAME /u MYUSERNAME /p MYPASSWORD /PID 4500 SUCCESS: The process with PID 4500 has been terminated. After seeing that the process was successfully killed, I tried logging in again and could do so successfully!

Posted by raviteja gorentla 1 comment:


Labels: ExtraStuff, General, LoadRunner, Performance Testing, Tips n Tricks, Vugen

How to run AJAX Click and Script in the Controller?

AJAX (Asynchronous JavaScript and XML) is a technique for creating interactive Web applications. With AJAX, Web pages exchange small packets of data with the server, instead of reloading an entire page. This reduces the amount of time that a user needs to wait when requesting data. It also increases the interactive capabilities and enhances the usability.

Using AJAX, developers can create fast Web pages using JavaScript and asynchronous server requests. The requests can originate from user actions, timer events, or other predefined triggers. AJAX components, also known as AJAX controls, are GUI-based controls that use the AJAX technique: they send a request to the server when a trigger occurs.

For example, a popular AJAX control is a Reorder List control that lets you drag components to a desired position in a list. VuGen’s support for AJAX implementation is based on Microsoft’s ASP.NET AJAX Control Toolkit formerly known as Atlas.

AJAX Supported Frameworks

The supported frameworks for AJAX functions are:

Atlas 1.0.10920.0/ASP.NET AJAX—All controls

Scriptaculous 1.8—Autocomplete, Reorder List, and Slider

VuGen supports the following frameworks at the engine level. This implies that VuGen will create standard Web Click and Script steps, but not AJAX-specific functions:

Prototype 1.6

Google Web Toolkit (GWT) 1.4

AJAX Example Script

VuGen uses the control handler layer to create the effect of an operation on a GUI control. During recording, when encountering one of the supported AJAX controls, VuGen generates a function with an ajax_xxx prefix. In the following example, a user selected item number 1 (index=1) in an Accordion control, and VuGen generated an ajax_accordion function.


web_browser("Accordion.aspx",

DESCRIPTION,

ACTION,

"Navigate=http://labm1app08/AJAX/Accordion/Accordion.aspx",

LAST);

lr_think_time(5);

ajax_accordion("Accordion",

DESCRIPTION,

"Framework=atlas",

"ID=ctl00_SampleContent_MyAccordion",

ACTION,

"UserAction=SelectIndex",

"Index=1",

LAST);

web_edit_field("free_text_2",

"Snapshot=t18.inf",

DESCRIPTION,

"Type=text",

"Name=free_text",

ACTION,

"SetValue=FILE_PATH",

LAST);

Note: When you record an AJAX session, VuGen generates standard Web (Click and Script) functions for objects that are not one of the supported AJAX controls. In the example above, the word FILE_PATH was typed into an edit box.

Posted by raviteja gorentla 1 comment:


Labels: Controller, General, LoadRunner, Performance Testing, Vugen

HTTP WATCH

Why do you need an HTTP Viewer or Sniffer? All web applications make extensive use of the HTTP protocol (or HTTPS for secure sites). Even simple web pages require multiple HTTP requests to download HTML, graphics and JavaScript. The ability to view the HTTP interaction between the browser and the web site is crucial to these areas of web development:

  • Troubleshooting

  • Performance tuning

  • Verifying that a site is secure and does not expose sensitive information

How can HttpWatch Help?

HttpWatch integrates with the Internet Explorer and Firefox browsers to show you exactly what HTTP traffic is triggered when you access a web page. If you access a site that uses secure HTTPS connections, HttpWatch automatically displays the decrypted form of the network traffic. Conventional network monitoring tools just display low-level data captured from the network. In contrast, HttpWatch has been optimized for displaying HTTP traffic and allows you to quickly see the values of headers, cookies, and query strings. HttpWatch also supports non-interactive examination of HTTP data. When log files are saved, a complete record of the HTTP traffic is saved in a compact file. You can even examine log files that your customers and suppliers have recorded using the free Basic Edition.

Why HttpWatch? Seven reasons to use HttpWatch rather than other HTTP monitoring tools:

  1. Easy to Use - start logging after just a couple of mouse clicks in Internet Explorer or Firefox. No other proxies, debuggers or network sniffers have to be configured

  2. Productive - quickly see cookies, headers, POST data and query strings without having to manually decode raw HTTP packets

  3. Robust - reliably log thousands of HTTP transactions for hours or days while tracking down intermittent problems

  4. Accurate - HttpWatch has minimal impact on the normal interaction of the browser with a web site. No extra network hops are added, allowing you to measure real world HTTP performance

  5. Flexible - HttpWatch only requires client-side installation and will work with any server side technology that renders HTML pages in Internet Explorer or Firefox. No special server-side permissions or configurations are required - ideal for use against production servers on the Internet or Intranet

  6. Comprehensive - works with HTTP compression, redirection, SSL encryption & NTLM authentication. A complete automation interface provides access to recorded data and allows HttpWatch to be controlled from most popular programming languages.

  7. Professional Support - updates and bug fixes are provided free of charge on our website and technical support is available by email, phone or fax.

Uses of HttpWatch:

  1. Testing a web application to ensure that it is correctly issuing or setting headers that control page expiration

  2. Finding out how other sites work and how they implement certain features

  3. Checking the information that the browser is supplying when you visit a site

  4. Verifying that a secure web site is not issuing sensitive data in cookies or headers

  5. Tuning the performance of a web site by measuring download times, caching or the number of network round trips

  6. Learning about how HTTP works (useful for programming and web design classes)

  7. Allowing webmasters to fine-tune the caching of images and other content

  8. Performing regression testing on production servers to verify performance and correct behavior

Posted by raviteja gorentla 1 comment:


Labels: Controller, Errors in LoadRunner, ExtraStuff, General, JMeter, Monitoring, Performance Testing, Scripting, Tips n Tricks, Vugen

What is a HAR file and what is the use of HAR?

HAR stands for HTTP Archive.

This is a common format for recording HTTP tracing information. The file contains a variety of information, but for our purposes it has a record of each object being loaded by a browser, and the timings of each of these objects are recorded. The HAR file format is still an evolving standard, and the information contained within it is both flexible and extensible. You should expect the HAR file to include a breakdown of timings including:

  • how long it takes to fetch the DNS information

  • how long each object takes to be requested

  • how long it takes to connect to the server

  • how long it takes to transfer from the server to the browser of each object

  • whether the object is blocked or not

The data is stored as a JSON document and extracting meaning from the low level data is not always easy, but with practice, a HAR file can quickly help you identify the key performance problems with a web page, which in turn will help you efficiently target your development towards the areas that will deliver the greatest return on your efforts.
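As a rough, abbreviated sketch (the field values below are invented for illustration and many optional fields are omitted), the JSON structure looks roughly like this, with one entry per requested object and a timings block carrying the phases listed above:

{
  "log": {
    "version": "1.2",
    "creator": { "name": "browser-or-proxy", "version": "1.0" },
    "entries": [
      {
        "startedDateTime": "2014-01-01T12:00:00.000Z",
        "time": 142,
        "request":  { "method": "GET", "url": "http://example.com/logo.png" },
        "response": { "status": 200, "bodySize": 5120 },
        "timings": { "blocked": 2, "dns": 10, "connect": 25, "send": 1, "wait": 80, "receive": 24 }
      }
    ]
  }
}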

Posted by raviteja gorentla 1 comment:


Labels: Controller, Errors in LoadRunner, General, LoadRunner, Performance Testing, Scripting, Tips n Tricks, Vugen

Parameterization in LoadRunner

Replacing hard coded values in the script with different values is called Parameterization.

Parameterization is used for:

  1. Reducing script size

  2. Avoiding cache effect

Types of Parameters:

1. Date/Time – Whenever we have to replace a date value with a parameter, the Date/Time parameter is used. Any post with a past date is not valid; to keep it updated, the Date/Time parameter provides the flexibility to get the current or a future date. If a past date is needed, it handles that too.

2. Group Name – We can generate a parameter based on the group that we select in the Controller for the script during execution. This parameter will only work while running the script in the Controller.

3. Iteration Number – This replaces the parameter with the current iteration number. This is generally used to build some logic. For example, when we want some code in the script to be executed alternately, we can use the iteration number to check whether it is even or odd, and execute the function for one of the conditions.

4. Load Generator Name – We can also generate a parameter, while executing the script in the Controller, based on the name of the load generator on which the script is running. This parameter only works while running the script in the Controller.

5. Vuser ID – When we run the script in the Controller, it assigns a unique ID to each virtual user emulated during the execution. This parameter type is used to print the Vuser ID in an external file for script-debugging purposes, or to segregate transaction volume based on Vuser ID.

6. File – Sometimes we want to pass specific values into the script. In such cases, we use a file and enter the values that we want to use during execution. LoadRunner provides options to run the script with the provided list sequentially or randomly on the next iteration. In a few cases we want to use a set of related values passed to the script; in such cases, we can use the same file for the other parameter value as well.

7. Random Number – As per the need, Vugen also generates a random value from the provided range.

8. Unique Value – In a few situations, the script is not allowed to pass any duplicate value. In such cases, a unique parameter is used to avoid failures due to duplicate values.

9. User Defined Function – Such a parameter calls a function whose return value replaces the parameter name.

10. XML – XML parameter types are used for multiple-valued data contained in an XML structure. XML parameters are widely used with Web Service scripts and with SOA services.
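A minimal sketch of how a File parameter is consumed in a script (the parameter names {username} and {password}, the URL, and the form fields are placeholders defined in the Parameter List, not values from the original post):

web_submit_data("login",
    "Action=http://myserver/login",             // hypothetical URL
    "Method=POST",
    ITEMDATA,
    "Name=user", "Value={username}", ENDITEM,   // replaced from the data file on each iteration
    "Name=pass", "Value={password}", ENDITEM,
    LAST);

// The same parameter can also be read from C code:
lr_output_message("Current user: %s", lr_eval_string("{username}"));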

Posted by raviteja gorentla No comments:


Labels: Analysis, Controller, ExtraStuff, General, LoadRunner, Performance Testing, Vugen

Capture, Filter and Inspect Packets using Wireshark Tool

Here is the demo. Wireshark, a network analysis tool formerly known as Ethereal, captures packets in real time and displays them in a human-readable format. Wireshark includes filters, color coding and other features that let you dig deep into network traffic and inspect individual packets. This tutorial will get you up to speed with the basics of capturing packets, filtering them, and inspecting them. You can use Wireshark to inspect a suspicious program's network traffic, analyze the traffic flow on your network, or troubleshoot network problems. Getting Wireshark: You can download Wireshark for Windows or Mac OS X from its official website. If you're using Linux or another UNIX-like system, you'll probably find Wireshark in its package repositories; for example, if you're using Ubuntu, you'll find Wireshark in the Ubuntu Software Center. Just a quick warning: many organizations don't allow Wireshark and similar tools on their networks. Don't use this tool at work unless you have permission.

Capturing Packets: After downloading and installing Wireshark, you can launch it and click the name of an interface under Interface List to start capturing packets on that interface. For example, if you want to capture traffic on the wireless network, click your wireless interface. You can configure advanced features by clicking Capture Options, but this isn't necessary for now. As soon as you click the interface's name, you'll see the packets start to appear in real time. Wireshark captures each packet sent to or from your system. If you're capturing on a wireless interface and have promiscuous mode enabled in your capture options, you'll also see the other packets on the network. Click the stop capture button near the top left corner of the window when you want to stop capturing traffic.

Color Coding You’ll probably see packets highlighted in green, blue, and black. Wireshark uses colors to help you identify the types of traffic at a glance. By default, green is TCP traffic, dark blue is DNS traffic, light blue is UDP traffic, and black identifies TCP packets with problems — for example, they could have been delivered out-of-order.

Sample Captures If there’s nothing interesting on your own network to inspect, Wireshark’s wiki has you covered. The wiki contains a page of sample capture files that you can load and inspect. Opening a capture file is easy; just click Open on the main screen and browse for a file. You can also save your own captures in Wireshark and open them later.

Filtering Packets If you’re trying to inspect something specific, such as the traffic a program sends when phoning home, it helps to close down all other applications using the network so you can narrow down the traffic. Still, you’ll likely have a large amount of packets to sift through. That’s where Wireshark’s filters come in. The most basic way to apply a filter is by typing it into the filter box at the top of the window and clicking Apply (or pressing Enter). For example, type “dns” and you’ll see only DNS packets. When you start typing, Wireshark will help you autocomplete your filter. You can also click the Analyze menu and select Display Filters to create a new filter. Another interesting thing you can do is right-click a packet and select Follow TCP Stream. You’ll see the full conversation between the client and the server. Close the window and you’ll find a filter has been applied automatically — Wireshark is showing you the packets that make up the conversation.
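A few common display filter expressions (illustrative examples, not taken from the original post) that can be typed into the filter box:

  • dns : show only DNS packets

  • tcp.port == 443 : traffic to or from TCP port 443

  • ip.addr == 192.168.1.10 : packets to or from a specific host

  • http.request.method == "POST" : only HTTP POST requests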

Inspecting Packets Click a packet to select it and you can dig down to view its details. You can also create filters from here — just right-click one of the details and use the Apply as Filter submenu to create a filter based on it. Wireshark is an extremely powerful tool, and this tutorial is just scratching the surface of what you can do with it. Professionals use it to debug network protocol implementations, examine security problems and inspect network protocol internals

Posted by raviteja gorentla No comments:


Labels: Analysis, Controller, General, Performance Testing, Tips n Tricks

Wireshark

Source: https://www.wireshark.org/download.html Wireshark is the world's foremost network protocol analyzer. It lets you see what's happening on your network at a microscopic level. It is the de facto (and often de jure) standard across many industries and educational institutions. Wireshark development thrives thanks to the contributions of networking experts across the globe. It is the continuation of a project that started in 1998.

Features:

  1. Deep inspection of hundreds of protocols, with more being added all the time

  2. Live capture and offline analysis

  3. Standard three-pane packet browser

  4. Multi-platform: Runs on Windows, Linux, OS X, Solaris, FreeBSD, NetBSD, and many others

  5. Captured network data can be browsed via a GUI, or via the TTY-mode TShark utility

  6. The most powerful display filters in the industry

  7. Rich VoIP analysis

  8. Read/write many different capture file formats: tcpdump (libpcap), Pcap NG, Catapult DCT2000, Cisco Secure IDS iplog, Microsoft Network Monitor, Network General Sniffer® (compressed and uncompressed), Sniffer® Pro, and NetXray®, Network Instruments Observer, NetScreen snoop, Novell LANalyzer, RADCOM WAN/LAN Analyzer, Shomiti/Finisar Surveyor, Tektronix K12xx, Visual Networks Visual UpTime, WildPackets EtherPeek/TokenPeek/AiroPeek, and many others

  9. Capture files compressed with gzip can be decompressed on the fly

  10. Live data can be read from Ethernet, IEEE 802.11, PPP/HDLC, ATM, Bluetooth, USB, Token Ring, Frame Relay, FDDI, and others (depending on your platform)

  11. Decryption support for many protocols, including IPsec, ISAKMP, Kerberos, SNMPv3, SSL/TLS, WEP, and WPA/WPA2

  12. Coloring rules can be applied to the packet list for quick, intuitive analysis

  13. Output can be exported to XML, PostScript®, CSV, or plain text

Posted by raviteja gorentla No comments:


Labels: ExtraStuff, General

MongoDB

MongoDB is a document database that provides high performance, high availability, and easy scalability.

Document Database

  • Documents (objects) map nicely to programming language data types.

  • Embedded documents and arrays reduce need for joins.

  • Dynamic schema makes polymorphism easier.

High Performance

  • Embedding makes reads and writes fast.

  • Indexes can include keys from embedded documents and arrays.

  • Optional streaming writes (no acknowledgments).

High Availability

  • Replicated servers with automatic master failover.

Easy Scalability

  • Automatic sharding distributes collection data across machines.

  • Eventually-consistent reads can be distributed over replicated servers.

Advanced Operations

  • With MongoDB Management Service (MMS) MongoDB supports a complete backup solution and full deployment monitoring.

MongoDB Data Model

A MongoDB deployment hosts a number of databases. A database holds a set of collections. A collection holds a set of documents. A document is a set of key-value pairs. Documents have dynamic schema. Dynamic schema means that documents in the same collection do not need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data.

MongoDB Queries

Queries in MongoDB provide a set of operators to define how the find() method selects documents from a collection, based on a query specification document that uses a combination of exact equality matches and conditionals using query operators.

Deployment Architectures

Although MongoDB supports "standalone" or single-instance operation, production MongoDB deployments are distributed by default. Replica sets provide high-performance replication with automated failover, while sharded clusters make it possible to partition large data sets over many machines transparently to the users. MongoDB users combine replica sets and sharded clusters to provide high levels of redundancy for large data sets, transparently for applications.

MongoDB Design Philosophy

MongoDB wasn't designed in a lab. We built MongoDB from our own experiences building large-scale, high-availability, robust systems. We didn't start from scratch; we really tried to figure out what was broken, and tackle that. So the way I think about MongoDB is that if you take MySQL, and change the data model from relational to document-based, you get a lot of great features: embedded docs for speed, manageability, agile development with schema-less databases, easier horizontal scalability because joins aren't as important. There are lots of things that work great in relational databases: indexes, dynamic queries and updates to name a few, and we haven't changed much there. For example, the way you design your indexes in MongoDB should be exactly the way you do it in MySQL or Oracle, you just have the option of indexing an embedded field. —Eliot Horowitz, MongoDB CTO and Co-founder

  • New database technologies are needed to facilitate horizontal scaling of the data layer, easier development, and the ability to store order(s) of magnitude more data than was used in the past.

  • A non-relational approach is the best path to database solutions which scale horizontally to many machines.

  • It is unacceptable if these new technologies make writing applications harder. Writing code should be faster, easier, and more agile.

  • The document data model (JSON/BSON) is easy to code to, easy to manage (dynamic schema), and yields excellent performance by grouping relevant data together internally.

  • It is important to keep deep functionality to keep programming fast and simple. While some things must be left out, keep as much as possible – for example secondary indexes, unique key constraints, atomic operations, multi-document updates.

  • Database technology should run anywhere, being available both for running on your own servers or VMs, and also as a cloud pay-for-what-you-use service.

Key MongoDB Features MongoDB focuses on flexibility, power, speed, and ease of use:

Flexibility: MongoDB stores data in JSON documents (which we serialize to BSON). JSON provides a rich data model that seamlessly maps to native programming language types, and the dynamic schema makes it easier to evolve your data model than with a system with enforced schemas such as an RDBMS.

Power: MongoDB provides a lot of the features of a traditional RDBMS such as secondary indexes, dynamic queries, sorting, rich updates, upserts (update if document exists, insert if it doesn't), and easy aggregation. This gives you the breadth of functionality that you are used to from an RDBMS, with the flexibility and scaling capability that the non-relational model allows.

Speed/Scaling: By keeping related data together in documents, queries can be much faster than in a relational database where related data is separated into multiple tables and then needs to be joined later. MongoDB also makes it easy to scale out your database. Autosharding allows you to scale your cluster linearly by adding more machines. It is possible to increase capacity without any downtime, which is very important on the web when load can increase suddenly and bringing down the website for extended maintenance can cost your business large amounts of revenue.

Ease of use: MongoDB works hard to be very easy to install, configure, maintain, and use. To this end, MongoDB provides few configuration options, and instead tries to automatically do the "right thing" whenever possible. This means that MongoDB works right out of the box, and you can dive right into developing your application, instead of spending a lot of time fine-tuning obscure database configurations.

Operations: MongoDB is a server process that runs on Linux, Windows and OS X. It can be run as either a 32-bit or 64-bit application. We recommend running in 64-bit mode, since MongoDB is limited to a total data size of about 2GB for all databases in 32-bit mode. The MongoDB process listens on port 27017 by default (note that this can be set at start time; please see the mongod options for more information). Clients connect to the MongoDB process, optionally authenticate themselves if security is turned on, and perform a sequence of actions, such as inserts, queries and updates. MongoDB stores its data in files (the default location is /data/db/), and uses memory-mapped files for data management for efficiency. MongoDB can also be configured for data replication. Additionally, the MongoDB Management Service (MMS) application is available for managing MongoDB clusters using a simple user interface. MMS provides backup and monitoring, and is available to all users in the cloud and on-premises as part of MongoDB Standard and Enterprise Subscriptions.

Agile and Scalable: MongoDB (from "humongous") is an open-source document database, and the leading NoSQL database. Written in C++, MongoDB features:

  • Document-Oriented Storage » JSON-style documents with dynamic schemas offer simplicity and power.

  • Full Index Support » Index on any attribute, just like you're used to.

  • Replication & High Availability » Mirror across LANs and WANs for scale and peace of mind.

  • Auto-Sharding » Scale horizontally without compromising functionality.

  • Querying » Rich, document-based queries.

  • Fast In-Place Updates » Atomic modifiers for contention-free performance.

  • Map/Reduce » Flexible aggregation and data processing.

  • GridFS » Store files of any size without complicating your stack.

  • MongoDB Management Service » Manage MongoDB on the cloud infrastructure of your choice.

  • MongoDB Enterprise » The best way to run MongoDB in production. Secured. Supported. Certified.

  • Production Support » Our experts at your fingertips. Get access to our global support organization 24x365.

source: http://www.mongodb.org/

Posted by raviteja gorentla 2 comments:


Labels: ExtraStuff, General, Tips n Tricks

Oracle GoldenGate Technology

After Oracle Corp. acquired GoldenGate Software, there has been a lot of buzz about Oracle GoldenGate, and it was one of the hot topics at Oracle OpenWorld 2010. Oracle GoldenGate can be used as a replication tool, for ETL, and even as a DR solution.

Oracle GoldenGate (Golden Gate) is probably the best replication software, and it is very easy to configure and deploy in a large-scale environment. Here are some of the things you need to be aware of:

  • All Golden Gate configuration files are ASCII text files. It is very easy to make changes, but this is prone to human error in an environment with many DBAs working on it.

  • In order to use parallel apply threads, Golden Gate breaks down the database transaction into multiple transactions based on the hashing key defined for the range split of the data. So transactional consistency will not be guaranteed in real time, though there won't be any data loss; make sure that your application can tolerate this.

  • If there is no primary key or unique index exists on any table, Golden Gate will use all the columns as supplemental logging key pair for both extracts and replicats. But if you define key columns in the Golden Gate extract parameter file and if you don't have the supplemental logging enabled on that key columns combination, then Golden Gate will assume missing key columns record data as "NULL", which is a huge deal, and this will introduce logical data corruption on the target.

  • Golden Gate started supporting bulk data loads with the 11.1 release, but any NOLOGGING data changes will be silently ignored without any warning.

  • Golden Gate doesn't support compression on the source database.

  • Golden Gate does support DDL replication but it is not easy to do selective DDL replication, it replicates every DDL that happens on the source database which is not desirable for some customers.

  • Tables being replicated to on the target can also be written to by any other application or DBA's.

  • Golden Gate supports ignoring data conflicts for updates after the first instantiation of the target database until it catches up. But it is very easy to forget turning off that parameter and any updates being lost will not be alerted by Golden Gate.

  • Golden Gate still works by reverse engineering the Oracle redolog. This may not be totally true with Golden Gate 11, but I expect Golden Gate to interpret Oracle redo more directly in later versions of 11 or 12.

  • Golden Gate dynamically decides to change the key columns that form the supplemental logging based on the state of primary key (i.e. in VALIDATED or NONVALIDATED state), which can introduce data corruptions on the target databases as the expected key columns data is missing in the trail files and they will be set to NULL. They now have the patch available for this, you can set "_USEALLKEYCOLUMNS and ALLOWNONVALIDATEDKEYS" parameters in GLOBALS file to get around this problem.

Posted by raviteja gorentla 2 comments:


Labels: ExtraStuff, General, Tips n Tricks

Base64 Encode/Decode for LoadRunner

Code:

#include "base64.h"

vuser_init()

{

int res;

// ENCODE

lr_save_string("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789","plain");

b64_encode_string( lr_eval_string("{plain}"), "b64str" );

lr_output_message("Encoded: %s", lr_eval_string("{b64str}") );

// DECODE

b64_decode_string( lr_eval_string("{b64str}"), "plain2" );

lr_output_message("Decoded: %s", lr_eval_string("{plain2}") );

// Verify decoded matches original plain text

res = strcmp( lr_eval_string("{plain}"), lr_eval_string("{plain2}") );

if (res==0) lr_output_message("Decoded matches original plain text");

return 0;

}

base64.h include file

/*

Base 64 Encode and Decode functions for LoadRunner

==================================================

This include file provides functions to Encode and Decode

LoadRunner variables. It's based on source codes found on the

internet and has been modified to work in LoadRunner.

Created by Kim Sandell / Celarius - www.celarius.com

*/

// Encoding lookup table

char base64encode_lut[] = {

'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q',

'R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f','g','h',

'i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y',

'z','0','1','2','3','4','5','6','7','8','9','+','/','='};

// Decode lookup table

char base64decode_lut[] = {

0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,

0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,

0, 0, 0,62, 0, 0, 0,63,52,53,54,55,56,57,58,59,60,61, 0, 0,

0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14,

15,16,17,18,19,20,21,22,23,24,25, 0, 0, 0, 0, 0, 0,26,27,28,

29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,

49,50,51, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, };

void base64encode(char *src, char *dest, int len)

// Encodes a buffer to base64

{

int i=0, slen=strlen(src);

for(i=0;i<slen && i<len;i+=3,src+=3)

{ // Enc next 4 characters

*(dest++)=base64encode_lut[(*src&0xFC)>>0x2];

*(dest++)=base64encode_lut[(*src&0x3)<<0x4|(*(src+1)&0xF0)>>0x4];

*(dest++)=((i+1)<slen)?base64encode_lut[(*(src+1)&0xF)<<0x2|(*(src+2)&0xC0)>>0x6]:'=';

*(dest++)=((i+2)<slen)?base64encode_lut[*(src+2)&0x3F]:'=';

}

*dest='\0'; // Append terminator

}

void base64decode(char *src, char *dest, int len)

// Decodes a base64 encoded buffer

{

int i=0, slen=strlen(src);

for(i=0;i<slen && i<len;i+=4,src+=4)

{ // Store next 4 chars in vars for faster access

char c1=base64decode_lut[*src], c2=base64decode_lut[*(src+1)], c3=base64decode_lut[*(src+2)], c4=base64decode_lut[*(src+3)];

// Decode to 3 chars

*(dest++)=(c1&0x3F)<<0x2|(c2&0x30)>>0x4;

*(dest++)=(c3!=64)?((c2&0xF)<<0x4|(c3&0x3C)>>0x2):'\0';

*(dest++)=(c4!=64)?((c3&0x3)<<0x6|(c4&0x3F)):'\0';

}

*dest='\0'; // Append terminator

}

int b64_encode_string( char *source, char *lrvar )

// ----------------------------------------------------------------------------

// Encodes a string to base64 format

//

// Parameters:

// source Pointer to source string to encode

// lrvar LR variable where base64 encoded string is stored

//

// Example:

//

// b64_encode_string( "Encode Me!", "b64" )

// ----------------------------------------------------------------------------

{

int dest_size;

int res;

char *dest;

// Allocate dest buffer

dest_size = 1 + ((strlen(source)+2)/3*4);

dest = (char *)malloc(dest_size);

memset(dest,0,dest_size);

// Encode & Save

base64encode(source, dest, dest_size);

lr_save_string( dest, lrvar );

// Free dest buffer

res = strlen(dest);

free(dest);

// Return length of dest string

return res;

}

int b64_decode_string( char *source, char *lrvar )

// ----------------------------------------------------------------------------

// Decodes a base64 string to plaintext

//

// Parameters:

// source Pointer to source base64 encoded string

// lrvar LR variable where decoded string is stored

//

// Example:

//

// b64_decode_string( lr_eval_string("{b64}"), "Plain" )

// ----------------------------------------------------------------------------

{

int dest_size;

int res;

char *dest;

// Allocate dest buffer

dest_size = strlen(source);

dest = (char *)malloc(dest_size);

memset(dest,0,dest_size);

// Decode & Save

base64decode(source, dest, dest_size);

lr_save_string( dest, lrvar );

// Free dest buffer

res = strlen(dest);

free(dest);

// Return length of dest string

return res;

}

Posted by raviteja gorentla 1 comment:


Labels: Controller, General, LoadRunner, Performance Testing, PerformanceCenter, Scripting, Tips n Tricks, Vugen

lr_paramarr_random function in LoadRunner

In performance testing, it is really important to simulate a realistic user path through an application; for example, randomly selecting an image link from a gallery, or selecting a share from a share list. In such situations, you can use the LoadRunner lr_paramarr_random function to select a random value from a captured parameter array. Similarly, you can also write your own code to do the same. Before you use this function, you will need to use the web_reg_save_param function to capture all the ordinal values; this is achieved by passing "ORD=ALL" into the function. The following code demonstrates the use of lr_paramarr_random: it saves the link IDs using web_reg_save_param and then uses lr_paramarr_random to pick one of them at random.

Example: This example shows how to get a random value from a parameter array.

char * FlightVal;

web_reg_save_param("outFlightVal",
    "LB=outboundFlight value=",
    "RB=>",
    "ORD=ALL",
    "SaveLen=18",
    LAST );

web_submit_form("reservations.pl",
    "Snapshot=t4.inf",
    ITEMDATA,
    "Name=depart", "Value=London", ENDITEM,
    "Name=departDate", "Value=11/20/2003", ENDITEM,
    "Name=arrive", "Value=New York", ENDITEM,
    "Name=returnDate", "Value=11/21/2003", ENDITEM,
    "Name=numPassengers", "Value=1", ENDITEM,
    "Name=roundtrip", "Value=", ENDITEM,
    "Name=seatPref", "Value=None", ENDITEM,
    "Name=seatType", "Value=Coach", ENDITEM,
    "Name=findFlights.x", "Value=83", ENDITEM,
    "Name=findFlights.y", "Value=16", ENDITEM,
    LAST );

/* The result of the web_reg_save_param having been called before the web_submit_form is:

Notify: Saving Parameter "outFlightVal_1 = 230;378;11/20/2003"
Notify: Saving Parameter "outFlightVal_2 = 231;337;11/20/2003"
Notify: Saving Parameter "outFlightVal_3 = 232;357;11/20/2003"
Notify: Saving Parameter "outFlightVal_4 = 233;309;11/20/2003"
Notify: Saving Parameter "outFlightVal_count = 4"
*/

FlightVal = lr_paramarr_random("outFlightVal");

Posted by raviteja gorentla No comments:


Labels: Errors in LoadRunner, ExtraStuff, General, LoadRunner, Performance Testing, PerformanceCenter, Tips n Tricks

SCOM (System Center Operations Manager) Monitoring tool

System Center Operations Manager 2012 – the complete application monitoring solution

For many years Operations Manager has delivered infrastructure monitoring, providing a strong foundation on which we can build to deliver application performance monitoring. It is important to understand that in order to provide application-level performance monitoring, we must first have a solid infrastructure monitoring solution in place. After all, if an application is having a performance issue, we must first establish whether the issue is due to an underlying platform problem or lies within the application itself. A key value that Operations Manager 2012 delivers is a solution that uses the same tools to monitor with visibility across infrastructure AND applications. To deliver application performance monitoring, Operations Manager 2012 provides four key capabilities:

  • Infrastructure monitoring – network, hardware and operating system

  • Server-side application monitoring – monitoring the actual code that is executed and delivered by the application

  • Client-side application monitoring – end-user experiences related to page load times, server and network latency, and client-side scripting exceptions

  • Synthetic transactions – pre-recorded testing paths through the application that highlight availability, response times, and unexpected responses

Configuring application performance monitoring

So it must be hard to configure all this, right? Lots of things to know: application domain knowledge, settings, configurations? Rest assured, this is not the case; enabling application performance monitoring is straightforward:

1. Define the application to monitor.

2. Configure server-side monitoring to be enabled and set your performance thresholds.

3. Configure client-side monitoring to be enabled and set your performance thresholds.

And that's it, you're now set to go. Of course, setting the threshold levels is the most important part of this, and that is the one thing that can't be done for you: you know your application and what the acceptable performance level is.

Configuring an application performance dashboard in 4 steps

It's great that the configuration of application performance monitoring is so easy, but making that information available in a concise, impactful manner is just as important. The creation of dashboards is also easy, with a wizard-driven experience. You can create an application-level dashboard in just 4 steps:

1. Choose where to store the dashboard.

2. Choose your layout structure. There are many different layouts available.

3. Specify which information you want to be part of your dashboard.

4. Choose who has access to the dashboard. As you will see a little later in this article, publishing information through web and SharePoint portals is very easy.

And just like that, you've created and published an application performance monitoring dashboard! Anyone who has either worked in IT or been the owner of an application knows the conversations and finger pointing that can go on when users complain about poor performance. Is it the hardware, the platform, a code issue or a network problem? This is where the complete solution from Operations Manager 2012 really shines. It's great that an application and its associated resources are highly available, but availability does not equal performance. Indeed, an application can be highly available (the '5 nines') yet perform below the required performance thresholds.

The diagram below shows an application dashboard created using the 4 steps above for a sample application. You can see that the application is available and 'green' across the board, but the end users are having performance issues. This is highlighted by the client-side alerts about performance.

Deep insight into application performance: Once you know that there is an issue, Operations Manager 2012 provides the ability to drill into the alert, down to the code level, to see exactly what is going on and where the issue is.

Reporting and trending analysis: An important aspect of application performance monitoring is being able to see how your applications are performing over time, and to quickly gain visibility into common issues and problematic components of the application. In the report shown below, you can quickly see the areas of the application to focus on, and also understand how these components are related to other parts of the application and may be causing flow-on effects.

Easily make information available: With Operations Manager 2012, it is very easy to delegate and publish information across multiple content access solutions. Operations staff have access to the Operations Manager console, and delegated information can now easily be published to the Silverlight-based Operations web console and also to SharePoint web parts.

Posted by raviteja gorentla No comments:


Labels: LoadRunner, Monitoring, Performance Testing, PerformanceCenter, Scripting

Bugzilla - A bug tracking tool

What is Bugzilla? Bugzilla is a bug tracking system developed at mozilla.org.

How do you enter a bug in Bugzilla? Use the "Enter a new bug" link on the main Bugzilla page. This takes you to a product selection screen.

What happens once you enter a bug? After you enter a bug, mail is sent both to you and to the QA department. A member of the QA department will verify that they can reproduce your bug.

How do you search for a bug? Use the "Query" link on the main Bugzilla page.

How do you submit a patch? Bugzilla supports attaching patches, test cases and various other file types directly from the bug report screen; just click "Create an attachment".

Are cookies required in Bugzilla? Yes.
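Newer Bugzilla releases (5.0 and later) also expose a REST API, so searches like those done through the "Query" page can be scripted. Below is a minimal sketch, assuming a Bugzilla 5.x server and the third-party requests library; the server URL, product name and status value are placeholders, not values from this article.

```python
# Sketch: searching Bugzilla through its REST API (Bugzilla 5.0+).
# The base URL, product and status below are placeholder values.
import requests

BUGZILLA_URL = "https://bugzilla.example.com"   # hypothetical server

def search_bugs(product, status="NEW"):
    """Return the list of bugs for a product in a given status."""
    resp = requests.get(
        f"{BUGZILLA_URL}/rest/bug",
        params={"product": product, "status": status},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["bugs"]          # REST responses wrap results in a "bugs" list

if __name__ == "__main__":
    for bug in search_bugs("TestProduct"):
        print(bug["id"], bug["status"], bug["summary"])
```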

How can you view your assigned bugs? Through the "My Bugs" link.

How can you generate a bug report? Through the "Report" link.

When was Bugzilla released? Bugzilla was released in 1998.

In which language was Bugzilla originally written? Bugzilla was originally written in Tcl.

Who developed Bugzilla? Terry Weissman.

In which language is Bugzilla written now? Perl.

What are the Bugzilla fields? Bugzilla has 11 fields:

  1. Product

  2. Component

  3. Version

  4. Platform

  5. OS

  6. Priority

  7. Severity

  8. Assigned To

  9. URL

  10. Summary

  11. Description

How can you edit your account in Bugzilla? Use the "User Preferences" link.

How can you add a new product in Bugzilla? Use the "Product" link.

How can you add components to a product in Bugzilla? Use the "Components" link.

How can you edit the versions of a product in Bugzilla? Use the "Edit Versions" link.
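The field list above corresponds closely to the parameters accepted when a bug is filed through the REST API of newer Bugzilla versions. The sketch below illustrates that mapping; the server URL, API key and all field values are placeholder assumptions, and the exact authentication mechanism may vary with your Bugzilla configuration.

```python
# Sketch: filing a bug through the Bugzilla REST API (Bugzilla 5.0+).
# The URL, API key and field values are placeholders.
import requests

BUGZILLA_URL = "https://bugzilla.example.com"   # hypothetical server
API_KEY = "your-api-key"                        # generated under User Preferences

def create_bug():
    payload = {
        "product": "TestProduct",        # field 1 in the list above
        "component": "TestComponent",    # field 2
        "version": "unspecified",        # field 3
        "platform": "PC",                # field 4
        "op_sys": "Windows",             # field 5
        "priority": "P2",                # field 6
        "severity": "normal",            # field 7
        "summary": "Login page throws HTTP 500 under load",                   # field 10
        "description": "Steps to reproduce: run a 50-user load test on /login.",  # field 11
    }
    resp = requests.post(
        f"{BUGZILLA_URL}/rest/bug",
        params={"api_key": API_KEY},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]             # ID of the newly created bug

if __name__ == "__main__":
    print("Created bug", create_bug())
```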

Posted by raviteja gorentla No comments:


Labels: ExtraStuff, General, Manual Testing, Tips n Tricks

Perfmon to capture the Process Performance of server or system

How do we track and log system and process information on a Windows operating system? Windows provides system tools for monitoring almost every type of performance, including CPU, memory, file system and network usage, so you do not have to rely on Task Manager alone. One such utility is PerfMon (Performance Monitor), which can graph and log performance metrics for specific processes, as well as set alerts and schedules for performance monitoring and logging. Steps to view and log performance data using the Windows 7 Performance Monitor (this walkthrough monitors LabVIEW as the example process):

1. Make sure the application you want to monitor (LabVIEW in this example) is open.

2. Click on the Start Menu and click Run.

3. Type perfmon into the Run command prompt and click OK.

4. You will then see the Performance Monitor window appear.

5. Click on the green '+' sign near the top of the Performance Monitor window to bring up the Add Counters window.

6. On the left side, choose which counters to add and click Add >>. Your counters should now appear under Added Counters. Click OK. There are many options for which counters to add; a few that are of special note when dealing with process (here, LabVIEW) performance issues are:

Memory

Processor

Process: under Process you can choose specific programs to monitor. In the upper-left pane of the Add Counters window you can select the aspects of the process you want to monitor, such as % Processor Time or Virtual Bytes; in the lower-left pane you can select which process to monitor. In this example, LabVIEW has been selected.

Now you should be able to see all of the chosen counters updating on the graph.

In order to log this data to file, right-click on Performance Monitor and select New»Data Collector Set.

Type in a name for your data set and press Next.

Complete the rest of the steps, including choosing the location where you want to save your log file. When you want to start logging the performance data, right-click on your Data Collector Set and select Start. To stop logging, right-click and select Stop. Note: PerfMon also provides ActiveX properties and methods, allowing you to control it from another application development environment (ADE) and even use it as an embedded control in an application.
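The same Process counters chosen in the steps above can also be sampled from a script instead of the Perfmon GUI. Below is a minimal sketch, assuming the third-party pywin32 package is installed (an assumption, not something this walkthrough requires); "LabVIEW" is simply the example process instance used above.

```python
# Sketch: sampling the same Perfmon "Process" counters via the PDH API (pywin32).
# Assumes: pip install pywin32, and a running process instance named "LabVIEW".
import time
import win32pdh

PROCESS = "LabVIEW"   # instance name as it appears under the Process object

query = win32pdh.OpenQuery()
cpu_path = win32pdh.MakeCounterPath((None, "Process", PROCESS, None, -1, "% Processor Time"))
mem_path = win32pdh.MakeCounterPath((None, "Process", PROCESS, None, -1, "Virtual Bytes"))
cpu = win32pdh.AddCounter(query, cpu_path)
mem = win32pdh.AddCounter(query, mem_path)

win32pdh.CollectQueryData(query)      # first sample; rate counters need two samples
for _ in range(5):
    time.sleep(1)
    win32pdh.CollectQueryData(query)
    _, cpu_val = win32pdh.GetFormattedCounterValue(cpu, win32pdh.PDH_FMT_DOUBLE)
    _, mem_val = win32pdh.GetFormattedCounterValue(mem, win32pdh.PDH_FMT_DOUBLE)
    print(f"CPU %: {cpu_val:6.2f}   Virtual Bytes: {mem_val:,.0f}")

win32pdh.CloseQuery(query)
```

The printed values can be redirected to a file if you want a lightweight alternative to a Data Collector Set log.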

