veritas backup exec support number  - Crack Key For U



Veritas Backup Exec Technical Support is extending chat service hours to 24 x 5 (Monday to Friday), providing enhanced flexibility for customers. For faster resolution, all low-severity issues (where business operations have not been adversely affected) and enhancement requests will be handled in chat. For responsive support, start a chat now.


Source: https://www.veritas.com/support/en_US/downloads
Symantec Backup Exec". backup-exec.helpmax.net. Retrieved 2016-02-25.
  • ^"Backup Exec 20.4 Administrator's Guide". www.veritas.com. Retrieved 2019-05-29.
  • ^"How Backup Exec works

    Veritas Backup Exec 21.2.1200.1930 Crack Full Download

    Veritas Backup Exec is a comprehensive data protection program. Users can back up Windows files with strong security. Known throughout the world, it offers a distinctive interface. Download the standalone setup of Veritas Backup Exec. It is a highly versatile business tool that does more than back up and recover information: every application, server, and workstation is expected to rely on it, because none of them can run dependably without the backups that are essential to their operation. The software is very easy to use, and the interface is clean.

    Veritas Backup Exec 21.2.1200.1930 Crack Full Version 2021 Free

    Its functionality on both Android and PC is impressive. Backups are exactly what Veritas Backup Exec Crack is built for: it is a versatile, adaptable, and reliable tool for business enterprises. Backup and retrieval of data are only two aspects of data protection. With Veritas Backup Exec you can also move to the cloud, which makes it possible to secure Windows Server 2016 and Hyper-V 2016 instantly and to save time, money, and tools across your entire infrastructure. In this way, you can dedicate the majority of your time to your own business.

    In either a physical or virtual environment, Veritas Backup Exec Crack (formerly Symantec Backup Exec) is capable of managing your data whether you’re running on a Windows, Mac, or Linux server, or using VMware or Hyper-V. Its new interface introduces a new workflow with improvements over the previous edition. Several configurations are available for Backup Exec to accommodate networks of all sizes. In addition, it has tools for scaling Backup Exec environments, as well as extending the application’s platform support and features.

    Key Features Of Veritas Backup Exec Cracked:

    • Not every organization is moving at the same pace, but an estimated 10 percent of data will move to cloud-based systems within the next three years.
    • With native cloud connectors for AWS, Microsoft Azure, IBM, and Google, you can accelerate your cloud journey.
    • Raises awareness of ransomware threats and reduces vulnerability.
    • Backup Exec is a certified backup solution for Azure and Microsoft Server 2021.
    • Subscription licensing lets you stay current with the latest technology releases.
    • It is designed to manage servers, applications, and workstations across a network.
    • Its latest edition introduces a new user interface and workflow changes over its predecessor. Several configurations are available for Backup Exec to accommodate networks of all sizes. Aside from scaling the Backup Exec environment, it also extends support for platforms and features.
    • Most organizations today rely on a patchwork of technology infrastructures because there are more options than ever before for storing and protecting data.
    • Originally developed by Symantec, Veritas Backup Exec (formerly Symantec Backup Exec) is an advanced, all-in-one data management tool for Windows and Mac environments.
    • Maintains data protection compliance using automated workflows.
    • Offers fast and reliable backup and restore features backed by a powerful, high-performance data management solution.
    • Integrated smart metering increases storage visibility.
    • Data and applications still need to be backed up on physical systems, but that will change.


    What’s new in Backup Exec 21:

    • Workflow and efficiency are improved by the new streamlined interface
    • VMware and Hyper-V backup and instant recovery
    • Hyper-V 2016 and Windows Server 2016 now have new protection
    • Various other bugs and improvements have been made.

    License Key

    • TYUI8B2-VBYUI8-VBTU9-VBNT7-CVRY9
    • ZSER2-VDFGH6-BNMK8-KLGH3-ASDF8ER
    • SFGazxvv-GADZV-EGADZ-AGDVX-AGDDG
    • ERYTDHF-SRGF-ARSFH-AGDSC-AGDSHXC

    Serial Key

    • IUYTR-JHGFD-HGFD-MNBVC-NVCXZ
    • HGFD-HGFD-JHGGD-IUYTR-OIUYTCM
    • ERFG2-VDEAGDS-BNEAGS-KLAE-ASEGDE
    • EWTDD-WRYSHDF-RSHF-RSHYF-RYSHF

    Activation key:

    • 5y15JmkZbVI-WZb3K8XtoYDj-ubn4VaoBg
    • PC5zZ4pteaC-T98jFO-22oovmCHlUV61rZ
    • aOeSDH4-PiOhW5nH5kiO-AE5PjJGYo693t
    • 3cV7yJWiLDjsZn-7lVhMJq-WLwS6ABQaN

    System Requirements:

    • Windows 8.1/10 (client) and Windows Server 2003/2008/2012/2016 (32-bit or 64-bit)
    • 2 GHz multi-core processor
    • 1 GB RAM
    • 1.9 GB disk space

    How to Install & Crack?

    • First, download the software using the button given at the end.
    • Uninstall the Previous version (if you are using any) with IObit Uninstaller Pro.
    • Extract the RAR or zip file.
    • Now run the setup and close it from everywhere.
    • Now Open the “Crack” or “Patch” file (given), copy and paste it into the installation directory, and run.
    • Or use the keys given to register the Program.
    • All done!😉
    Source: https://cracked4pc.com/veritas-backup-exec-crack/

    Backup Exec

    Backup and recovery software from Veritas Software

    Original author(s): Maynard Electronics
    Developer(s): Veritas Technologies LLC
    Initial release: 1980s
    Stable release: 21.3 / September 6, 2021
    Written in: C, C++, C#, .NET, Python
    Operating system: Windows 2019, Windows 2016, Windows 2012 R2, Windows 2012, Windows 2008 R2, Windows 2008, Windows 2003 R2, Windows 10, Windows 8.1, Windows 7, Red Hat Enterprise Linux, SUSE Linux Enterprise Server
    Platform: Windows Server, Linux, VMware vSphere, Microsoft Hyper-V
    Size: 2.4 GB
    Available in: English, French, German, Italian, Japanese, Korean, Portuguese, Russian, Simplified Chinese, Spanish, Traditional Chinese
    License: Proprietary commercial software
    Website: http://www.backupexec.com

    Veritas Backup Exec is a data protection software product designed for customers who have mixed physical and virtual environments, and who are moving to public cloud services. Supported platforms include VMware and Hyper-V virtualization, Windows and Linux operating systems, Amazon S3, Microsoft Azure and Google cloud storage, among others. All management and configuration operations are performed with a single user interface. Backup Exec also provides integrated deduplication, replication, and disaster recovery[1] capabilities and helps to manage multiple backup servers or multi-drive tape loaders.

    Backup Exec has an installation process that is well automated.[2] Installing Backup Exec 15 on a Windows Server 2012 R2 system takes around 30 minutes.[3] The installation wizard can be started from the Backup Exec Installation Media or the management console to push agents out to the physical servers, Hyper-V/VMware virtual machines, application/database systems hosting Active Directory, Exchange, Oracle database, SQL, and other supported platforms.

    With its client/server design, Backup Exec provides backup and restore capabilities for servers, applications and workstations across the network. Backup Exec recovers data, applications, databases, or systems, from an individual file, mailbox item, table object, to an entire server. Current versions of the software support Microsoft, VMware, and Linux, among a longer list of supported hardware and software.[4]

    When used with tape drives, Backup Exec uses the Microsoft Tape Format (MTF),[5] which is also used by Windows NTBackup, the backup utilities included in Microsoft SQL Server, and many other backup vendors, and is compatible with BKF. MTF was originally the proprietary tape format of Maynard (Backup Exec's first authors) and was later licensed by Microsoft as the standard Windows tape format. In addition, Microsoft also licensed and incorporated Backup Exec's backup engine into Windows NT, the server version of Windows.[6]

    In addition, Backup Exec's family of agents and options offer features for scaling the Backup Exec environment and extending platform and feature support. Backup Exec 21.3 is the latest version of Veritas’ backup and recovery software, released on September 6, 2021.[7]

    History

    Within the “backup” portion of the data protection spectrum, one Veritas product, Backup Exec, has been in the market for more than two decades. Since the early days of Microsoft’s journey to turn its Windows Server into the world’s dominant client-server operating system, Backup Exec has been one of a handful of technologies to protect it. As the Windows Server OS grew to become a platform of choice for application enablement and user productivity, Backup Exec’s media/platform support, application support, and internal operation evolved at a similar pace.[8]

    Backup Exec has a long history of successive owner-companies. Its earliest roots stretch back to the early 1980s when Maynard Electronics wrote a bundle of software drivers to help sell their tape-drive products.

    • 1982 - Maynard Electronics started. Maynard's software is known as "MaynStream."
    • 1989 - Maynard is acquired by Archive Corp. MaynStream is available for DOS, Windows, Macintosh, OS/2, and NetWare.
    • 1991 - Quest Development Corporation is independently formed to develop backup software under contract for Symantec.
    • 1993 - Conner Peripherals acquires Archive Corp. and renames the software "Backup Exec".
    • 1993 - Quest acquired rights to FastBack for Macintosh, and hired its principal author, Tom Chappell, from Fifth Generation Systems.
    • 1994 - Conner creates a subsidiary, Arcada Software, by acquiring Quest and merging it with its existing software division.
    • 1995 - Arcada acquires the SyTron division from Rexon, including their OS/2 backup software.
    • 1996 - Conner is acquired by Seagate Technology and Arcada is merged into its subsidiary Seagate Software.
    • 1999 - VERITAS Software acquires Seagate Software's Network and Storage Management Group, which included Backup Exec.
    • 2005 - Symantec acquires VERITAS, including Backup Exec.[9]
    • 2015 - Symantec announced they would be splitting off the Information Management business, which contains Backup Exec, into a new company named Veritas Technologies Corporation, to be acquired by The Carlyle Group.[10]
    • 2016 - Veritas Technologies re-launches as a newly independent company which contains Backup Exec.[11]

    Architecture

    Core components

    The core components that are contained in a basic Backup Exec architecture include the following:

    • A Backup Exec server is the heart of a Backup Exec installation. The Backup Exec server is a Windows server that:
      • Runs the Backup Exec software and services that control backup and restore operations
      • Is attached to and controls storage hardware
      • Maintains the Backup Exec database, media catalogs, and device catalogs
    • The Backup Exec Administration Console is the interface to control a Backup Exec server.
      • The Administration Console can be run directly on a Backup Exec server or from a remote system (using a Backup Exec Remote Administration Console).
    • Storage devices attached to the Backup Exec server contain the media on which backup data is written.
      • Backup Exec supports many different types of devices and media, including cloud, disk-based, and tape-based. Backup Exec supports an unlimited number of clients, NDMP-NAS systems, tape drives, and tape libraries.
    • Clients are the systems that contain the data which the Backup Exec server backs up.
      • Clients can include database servers, application servers, file servers, and individual workstations.

    Add-on components

    Backup Exec Agents and Options expand the features and functionality of the core Backup Exec server to support the most common server applications, including Microsoft Exchange, SharePoint and SQL Server, Oracle, Windows and Linux clients, server OSs, and the Hyper-V and VMware hypervisors.[2] Not all agents are agents in the traditional sense. For example, the Agent for VMware and Hyper-V does not carry out the backup process itself; it simply collects metadata (which takes a few seconds) so that Backup Exec can perform granular recoveries directly from storage at a point in the future, with no mounting required.

    Here is a list of Backup Exec Agents and Options:[12]

    Agents:
    • Agent for VMware and Hyper-V
    • Agent for Applications and Databases
    • Agent for Windows
    • Agent for Mac (no longer supported as of BE 16[13])
    • Agent for Linux
    • Remote Media Agent for Linux (RMAL)

    Options:
    • Deduplication Option
    • Enterprise Server Option
    • NDMP Option
    • Library Expansion Option
    • Virtual Tape Library (VTL) Unlimited Drive Option

    Installation

    Backup Exec and its options can be installed on a local computer, a remote computer, within a virtual environment, or on a public cloud "Infrastructure as a Service (IaaS)" virtualization platform.[14] Today Backup Exec supports the Backup Exec server installation on 64-bit operating systems only. However, the Agent for Windows can be installed on 32-bit operating systems. Several methods are available for installing Backup Exec.[15] An Environment Check runs automatically during installation to make sure that the installation process can complete. If Backup Exec finds any configuration issues that can be fixed during the installation, or that may prevent installation, warnings appear.[15]

    Backup Exec can be installed using the following:[16]

    • Installation wizard from the Backup Exec installation media, which guides you through the installation process.
    • Push-installation to remote computers through Terminal Services, when the installation media is on a shared drive (network share).[17]
    • Command line, which is called silent mode installation. The silent mode installation uses the Setup.exe program on the Backup Exec installation media.

    Additionally, Backup Exec installation media also has a Remote Administrator feature which can be installed on a remote computer or workstation to administer the Backup Exec server remotely.

    Backup Exec may install the additional products:[18]

    • Microsoft Report Viewer 2010 SP1
    • Microsoft .NET Framework 4.6
    • Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package MFC Security Update
    • Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package MFC Security Update
    • Microsoft Visual C++ 2012 Redistributable Package
    • Microsoft Visual C++ 2015 Redistributable Package
    • Microsoft SQL Server 2014 Express with SP2

    Configuration

    Backup Exec installations can have one or more Backup Exec servers, which are responsible for moving data from one or more locations to a storage medium, including cloud, disk, tape, and OST device. The data may be from the local system or from a remote system.[19] There are two primary Backup Exec architectures:


    1. Standalone Backup Exec configuration (Two-Tier)

    A single Backup Exec server is assigned the standalone Backup Exec server role. Each server runs the Backup Exec software and the services that control backup and restore operations of multiple clients. Each Backup Exec server maintains its own Backup Exec database, media catalogs, and device catalogs.

    2. Central Admin Server Option (CASO) configuration (Three-Tier)

    Large environments may contain multiple Backup Exec servers responsible for backing up many different client systems. Backup Exec servers in large environments can run independently of each other if each server is managed separately. Separate server management may not be an issue if there are only two or three Backup Exec servers, but it can become unwieldy as the environment grows. Backup Exec can centralize the management of multiple Backup Exec servers using an add-on option called the Backup Exec Central Admin Server Option (CASO). CASO ensures that everything throughout the network is protected by a single system that can be managed from one console[2] and also balances the workload across all Backup Exec servers in the environment.

    In a CASO environment, one Backup Exec server can be configured to be the Central Admin Server (CAS), while other Backup Execs become managed Backup Exec servers (MBESs) that are managed by the CAS. The CASO configuration simplifies the management and monitoring of enterprise-level environments.

    Features and Capabilities

    Backup Exec includes the following features and capabilities:

    • Backup Options:
    • Recovery Options:
      • Catalog-assisted granular recovery of objects, files, folders, applications, or VMs (including Exchange, SharePoint, SQL Server, and Active Directory) directly from storage, with no mounting or staging.
      • Restore to different targets or hardware (Dissimilar Hardware Recovery)
      • Restore to physical or virtual servers[20]
      • Simplified Disaster Recovery (SDR)
      • Guided Search and Restore: Built-in indexing and the ability to restore files through search.
      • True image restore
    • Cloud Support[22]
    • Virtual Server protection support[30]
      • Multi-hypervisor support (Microsoft Hyper-V, VMware vSphere, & Citrix XenServer)[31]
      • Supports Agentless backup of both Hyper-V and VMware virtual machines
      • Supports image-level, off-host backups of virtual machines
      • Support for VMware Changed Block Tracking (CBT)
      • Block Optimization Support:[32] Intelligent skipping of unused blocks within a virtual disk file
      • Integration with Microsoft Volume Shadow Copy Service and VMware’s vStorage APIs for Data Protection (VADP)[33]
      • From a single-pass backup of a virtual machine, recover:[34]
      • Fully integrated Physical-to-Virtual (P2V), which can be used for migrations or instant recovery[35]
      • Also supports - Backup to Virtual (B2V) and Point-in-time Conversion (PIT)[36]
    • Integrated Data Deduplication:[37]
      • Integrated block-level data deduplication[33]
      • Client, server-side, or OST appliance deduplication
      • Client deduplication supported for both Windows as well as Linux computers[38]
      • Optimized duplication (Opt Dup) supports backup "replication" from MMS/MBE to CAS/ CAS to MMS/MBE[39]
    • Security and Data Encryption:
      • Software encryption[40]
      • Hardware encryption (T10 encryption standard)[40]
      • Database Encryption Key (DEK)[41]
      • FIPS Version: OpenSSL FIPS 2.0.5[42]
      • Secure TLS protocol for its SSL control connection (over NDMP) between the Backup Exec Server and the Agent on a remote computer[43]
    • Management and Reporting:
      • Centralized administration:[44] Provides a single console for managing the entire Backup Exec environment, creates and delegates jobs to multiple Backup Exec servers, defines device and media sets.
      • Centralized reporting:[45] Monitors all job activity dispatched by the CAS in real time, provides holistic reporting for the entire storage environment, and centrally defines notification and alert settings.
      • Operational resiliency: Automatically load-balances jobs across multiple Backup Exec servers, provides job failover from one Backup Exec server to another, centralizes or replicates catalogs for restores.
      • Management Plug-in for VMware vSphere[46]
      • Management pack for Microsoft System Center Operations Manager 2007 R2 & 2012 R2 (SCOM)[47]
      • Management Plug-In for Kaseya[48]
      • Localization/Language packs[49]
      • Command Line Interface (BEMCLI)[50]
    • Media Management:
      • Automatic robotic/tape drive configuration
      • Broad tape device support
    • Heterogeneous Support:
      • Broad platform support
      • Bare-metal restore, supports P2V as an option.
      • Support for leading networking topologies
      • Advanced VSS support
      • OpenStorage (OST) support
      • IPv4 & IPv6 support[51]

    Multiplexing limitation

    Backup Exec does not have support for sending data streams from multiple parallel backup jobs to a single tape drive, which Veritas refers to as multiplexing.[52] Their NetBackup product does have this capability.[53]

    Multiplexing can reduce backup times when backing up data from non-solid state sources containing millions of small or highly fragmented files, which require very large amounts of head-seeking using traditional mechanical hard drives, and which significantly slow down the backup process.

    When only a single job is running, and the source server is constantly seeking at a high rate, the tape drive slows down or may stop, waiting for its write cache to be filled. These delays accessing data can cause the backup availability window to be exceeded, when multiple servers with slow transfer rates are being backed up one after the other to the tape device.

    A workaround to this is to install temporary disk storage in the backup server to use as a cache for the backup process. This storage is split into hundreds of small 1-5 gigabyte data blocks. Backups to the data blocks can be done in parallel, and each of the separate disk-based backup jobs is configured to duplicate and append to tape when completed.

    Licensing

    Backup Exec has the following licensing options:[54]

    • Capacity Edition - Deploy an unlimited number of Backup Exec servers, Agents and Options (Licensed per TB)
    • Capacity Edition Lite - Includes protection for Windows and Linux operating systems, VMware and Hyper-V virtual environments, Microsoft applications, Oracle, and Enterprise Vault (Licensed per TB)
    • V-Ray Edition - Protects an unlimited number of guest machines per host including all of the applications and databases (Licensed per occupied processor socket on the virtual host)[55]
    • Traditional - Licensing per Backup Exec server with Agents and Options available based on need

    Releases

    • MaynStream for Windows 3.0, May, 1992[56]
    • Conner Backup Exec 2.1 DOS Version[57]
    • Conner Backup Exec for Windows NT 3.1, May, 1993
    • Arcada Backup Exec, announced in June 1994 to be bundled with Windows "Chicago" (Barney, Doug, InfoWorld 16 (24): 26)
    • Arcada Software Backup Exec for Windows NT 6.0, April, 1995[58]
    • Seagate Software Backup Exec for Windows NT 7.0, August, 1997[59]
    • Seagate Backup Exec 7.2, October, 1998
    • VERITAS Backup Exec 7.3, March, 1999[60]
    • VERITAS Backup Exec 8.0, January, 2000
    • VERITAS Backup Exec 8.6, November, 2001
    • VERITAS Backup Exec 9.0, January 22, 2003[61]
    • VERITAS Backup Exec 9.1, November 4, 2003[62]
    • VERITAS Backup Exec 10.0, January, 2005[63]
    • Symantec Backup Exec 10d, September, 2005[64]
    • Symantec Backup Exec 11d, November, 2006[65]
    • Symantec Backup Exec 12, February, 2008[66]
    • Symantec Backup Exec 12.5, October, 2008[67]
    • Symantec Backup Exec 2010 (13.0), February, 2010[68]
    • Symantec Backup Exec 2010 SP1, August 16, 2010[69]
    • Symantec Backup Exec 2010 R2, August 2, 2010[70]
    • Symantec Backup Exec 2010 R2 SP1[71]
    • Symantec Backup Exec 2010 R3, May 3, 2011[72]
    • Symantec Backup Exec 2010 R3 SP1, June 12, 2012[73]
    • Symantec Backup Exec 2010 R3 SP2, February 1, 2012[74]
    • Symantec Backup Exec 2010 R3 SP3, July 26, 2013[75]
    • Symantec Backup Exec 2010 R3 SP4, January 27, 2014[71]
    • Symantec Backup Exec 2012 (14.0), March 5, 2012[76]
    • Symantec Backup Exec 2012 SP1, June 1, 2012[77]
    • Symantec Backup Exec 2012 SP2, July 26, 2013[75]
    • Symantec Backup Exec 2012 SP3, November 21, 2013[78]
    • Symantec Backup Exec 2012 SP4, March 13, 2014[79]
    • Symantec Backup Exec 2014, (14.1), June 2, 2014[80]
    • Symantec Backup Exec 2014 SP1, September 22, 2014[81]
    • Symantec Backup Exec 2014 SP2, December 15, 2014[71]
    • Symantec Backup Exec 15, (14.2 Rev 1180), April 6, 2015[82]
    • Symantec Backup Exec 15 FP1, July 8, 2015[83]
    • Symantec Backup Exec 15 FP2, October 19, 2015[84]
    • Symantec Backup Exec 15 FP3, December 9, 2015[85]
    • Veritas Backup Exec 15 FP4, April 18, 2016[86]
    • Veritas Backup Exec 15 FP5, August 1, 2016[87]
    • Veritas Backup Exec 16, November 7, 2016[88]
    • Veritas Backup Exec 16 FP1, April 4, 2017[89]
    • Veritas Backup Exec 16 FP2, July 31, 2017[90]
    • Veritas Backup Exec 20, November 7, 2017[91]
    • Veritas Backup Exec (20.1), April 2, 2018[92]
    • Veritas Backup Exec (20.2), August 13, 2018[93]
    • Veritas Backup Exec (20.3), October 23, 2018[94]
    • Veritas Backup Exec (20.4), May 6, 2019[95]
    • Veritas Backup Exec (20.5), Sep 02, 2019[96]
    • Veritas Backup Exec (20.6), Dec 02, 2019[97]
    • Veritas Backup Exec (21), Apr 06, 2020[98]
    • Veritas Backup Exec (21.1), Sep 06, 2020[99]
    • Veritas Backup Exec (21.2), Mar 01, 2021[100]
    • Veritas Backup Exec (21.3), Sep 06, 2021[7]


    References

    1. ^Veritas Technologies LLC (2015-07-22), Backup and Recovery InfoBit - Backup Exec 15: Simplified Disaster Recovery, retrieved 2016-02-19
    2. ^ abc"Developing a Real Backup Plan with Symantec's Backup Exec 15". EdTech. Retrieved 2016-02-23.
    3. ^"Veritas Backup Exec 15 review". IT PRO. Retrieved 2016-02-25.
    4. ^"Backup Exec Compatibility Lists (HCL and SCL)". www.veritas.com. Retrieved 2016-02-19.
    5. ^ ab"Media Sets, Media Families, and Backup Sets (SQL Server)". technet.microsoft.com. Retrieved 2016-02-26.
    6. ^Network World. IDG Network World Inc., 1992-03-02.
    7. ^ ab"Backup Exec 21.3 Readme". www.veritas.com. Retrieved 2021-10-28.
    8. ^"Symantec Backup Exec 2010 Deduplication And Archiving Suite - Essential Support - 1 Server". lifliycmww. Archived from the original on 2016-03-05. Retrieved 2016-02-25.
    9. ^"Symantec buys Veritas for $13.5bn stock". Retrieved 2016-02-19.
    10. ^"Symantec to split into security and storage software companies". Reuters. 2014-10-09. Retrieved 2016-02-19.
    11. ^"Newly Independent Veritas Re-Launches as Information Management Leader Veritas". www.veritas.com. 2012-11-05. Retrieved 2016-02-19.
    12. ^ ab"Veritas Backup Exec Administrator's Guide: Using encryption with Backup Exec". Veritas Support. Veritas Technologies LLC. 17 November 2017. Retrieved 30 January 2019.
    13. ^"Backup Exec Security Blogs". www.veritas.com. 2015-05-04. Retrieved 2016-02-19.
    14. ^"Whats_New_in_BackupExec_15_FP1"(PDF).
    15. ^"Backup Exec and Self Signed Certificates". www.veritas.com. 2012-05-17. Retrieved 2016-02-23.
    16. ^"White Paper: Windows® Enterprise Data Protection with Symantec Backup Exec™"(PDF).
    17. ^"White Paper: Windows® Enterprise Data Protection with Symantec Backup Exec™"(PDF).
    18. ^"Symantec Backup Exec Management Plug-in for VMware®". www.veritas.com. Archived from the original on 2016-12-20. Retrieved 2016-02-23.
    19. ^"Backup Exec 2014 Management Pack for Microsoft System Center Operations Manager (SCOM)". www.veritas.com. Retrieved 2016-02-26.
    20. ^"Symantec Backup Exec Management Plug-in for Kaseya® Download". www.veritas.com. Retrieved 2016-02-23.
    21. ^"How to modify the language Backup Exec user interface displays". www.veritas.com. Retrieved 2016-02-23.
    22. ^"Backup Exec 15 Management Command Line Interface (BEMCLI) Documentation". www.veritas.com. Retrieved 2016-02-23.
    23. ^"About using IPv4 and IPv6 in Backup Exec". www.veritas.com. Archived from the original on 2016-12-20. Retrieved 2016-02-23.
    24. ^Veritas tech note: What is the difference between multiplexing and multistreaming?, article #000004808, September 14, 2015, http://www.veritas.com/docs/000004808
    25. ^"About multiplexing". systemmanager.ru/nbadmin.en. Retrieved 22 June 2019.
    26. ^"Backup Exec 16 Licensing Guide". www.veritas.com. Retrieved 2016-02-19.
    27. ^"So what exactly is in this Backup Exec V-Ray edition any way?". The Spiceworks Community. Retrieved 2016-02-19.
    28. ^InfoWorld. InfoWorld Media Group, Inc., 1993-04-26.
    29. ^"DANIELSAYS.COM - Daniel's Legacy Computer Collections - Screen Shot Gallery - DOS - Backup Exec 2.1(f) DOS Version". www.danielsays.com. Archived from the original on 2015-01-27. Retrieved 2016-02-19.
    30. ^PC Mag. Ziff Davis, Inc., 1996-02-20.
    31. ^Software, Seagate. "Powerful New Seagate Backup Exec for Windows NT Sets the Standard for Windows NT Enterprise Storage Management". www.prnewswire.com. Retrieved 2016-02-19.
    32. ^Staff Writer (1999-06-24). "Veritas Software releases new version of Veritas Backup Exec Small Business Server Solution". ITWeb Technology News. Retrieved 2016-02-23.
    33. ^Corporation, VERITAS Software. "VERITAS Extends Data Protection Leadership With Backup Exec 9.0". www.prnewswire.com. Retrieved 2016-02-23.
    34. ^"Veritas Backup Exec 9.1 for Windows Servers Specs". CNET. Retrieved 2016-02-19.
    35. ^"VERITAS Partners Worldwide See Golden Opportunities in Backup Exec 10.0 Symantec Corporation". www.symantec.com. Retrieved 2016-02-19.
    36. ^"Symantec Backup Exec 10d is "Designed for Disk" and Delivers Continuous Data Protection

      Configuring and managing high availability clusters

      Red Hat Enterprise Linux 8

      Configuring and managing the Red Hat High Availability Add-On

      Red Hat Customer Content Services


      Abstract

      This guide provides information about installing, configuring, and managing the Red Hat High Availability Add-On for Red Hat Enterprise Linux 8.



      The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services.

      A cluster is two or more computers (called nodes or members) that work together to perform a task. Clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.

      High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its high availability service management component, Pacemaker.

      1.1. High Availability Add-On components

      The Red Hat High Availability Add-On consists of several components that provide the high availability service.

      The major components of the High Availability Add-On are as follows:

      • Cluster infrastructure — Provides fundamental functions for nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing.
      • High availability service management — Provides failover of services from one cluster node to another in case a node becomes inoperative.
      • Cluster administration tools — Configuration and management tools for setting up, configuring, and managing the High Availability Add-On. The tools are for use with the cluster infrastructure components, the high availability and service management components, and storage.

      You can supplement the High Availability Add-On with the following components:

      • Red Hat GFS2 (Global File System 2) — Part of the Resilient Storage Add-On, this provides a cluster file system for use with the High Availability Add-On. GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure.
      • LVM Locking Daemon (lvmlockd) — Part of the Resilient Storage Add-On, this provides volume management of cluster storage. lvmlockd support also requires cluster infrastructure.
      • HAProxy — Routing software that provides high availability load balancing and failover in layer 4 (TCP) and layer 7 (HTTP, HTTPS) services.

      1.2. High Availability Add-On concepts

      Some of the key concepts of a Red Hat High Availability Add-On cluster are as follows.

      If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent. A fence device is an external device that can be used by the cluster to restrict access to shared resources by an errant node, or to issue a hard reboot on the cluster node.

      Without a fence device configured you do not have a way to know that the resources previously used by the disconnected cluster node have been released, and this could prevent the services from running on any of the other cluster nodes. Conversely, the system may assume erroneously that the cluster node has released its resources and this can lead to data corruption and data loss. Without a fence device configured data integrity cannot be guaranteed and the cluster configuration will be unsupported.

      When the fencing is in progress no other cluster operation is allowed to run. Normal operation of the cluster cannot resume until fencing has completed or the cluster node rejoins the cluster after the cluster node has been rebooted.

      For more information about fencing, see Fencing in a Red Hat High Availability Cluster.
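
      As a rough illustration only, a fence device is typically registered with the pcs command line tool. The fence agent name (fence_ipmilan), its parameters, and the address and credentials below are assumptions for this sketch, not values taken from this guide; use pcs stonith list and pcs stonith describe to see the real options for your hardware.

        # pcs stonith create myfence fence_ipmilan ip=10.0.0.101 username=admin password=secret pcmk_host_list=z1.example.com

      Once defined, the fence device appears as a cluster resource and can be monitored like any other resource.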

      In order to maintain cluster integrity and availability, cluster systems use a concept known as quorum to prevent data corruption and loss. A cluster has quorum when more than half of the cluster nodes are online. To mitigate the chance of data corruption due to failure, Pacemaker by default stops all resources if the cluster does not have quorum.

      Quorum is established using a voting system. When a cluster node does not function as it should or loses communication with the rest of the cluster, the majority working nodes can vote to isolate and, if needed, fence the node for servicing.

      For example, in a 6-node cluster, quorum is established when at least 4 cluster nodes are functioning. If the majority of nodes go offline or become unavailable, the cluster no longer has quorum and Pacemaker stops clustered services.

      The quorum features in Pacemaker prevent what is also known as split-brain, a phenomenon where the cluster is separated from communication but each part continues working as separate clusters, potentially writing to the same data and possibly causing corruption or loss. For more information on what it means to be in a split-brain state, and on quorum concepts in general, see Exploring Concepts of RHEL High Availability Clusters - Quorum.

      A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present.
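
      As a hedged example of how these quorum values can be inspected on a running cluster (command names are standard RHEL 8 tooling, assumed rather than quoted from this guide):

        # pcs quorum status
        # corosync-quorumtool -s

      Both commands report the expected votes, the votes currently present, and whether the cluster is quorate.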

      A cluster resource is an instance of a program, data, or application to be managed by the cluster service. These resources are abstracted by agents that provide a standard interface for managing the resource in a cluster environment.

      To ensure that resources remain healthy, you can add a monitoring operation to a resource’s definition. If you do not specify a monitoring operation for a resource, one is added by default.
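
      The following sketch shows one way a resource with an explicit monitoring operation is typically created with pcs; the resource name and parameters are illustrative assumptions:

        # pcs resource create ExampleIP ocf:heartbeat:IPaddr2 ip=192.168.122.130 cidr_netmask=24 op monitor interval=30s

      If the op monitor clause is omitted, a default monitoring operation is added to the resource definition.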

      You can determine the behavior of a resource in a cluster by configuring constraints. You can configure the following categories of constraints:

      • location constraints — A location constraint determines which nodes a resource can run on.
      • ordering constraints — An ordering constraint determines the order in which the resources run.
      • colocation constraints — A colocation constraint determines where resources will be placed relative to other resources.
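
      As a hedged sketch, one constraint of each category might be expressed with pcs as follows, using the hypothetical resources ExampleIP and ExampleSite:

        # pcs constraint location ExampleSite prefers z1.example.com
        # pcs constraint order start ExampleIP then start ExampleSite
        # pcs constraint colocation add ExampleSite with ExampleIP INFINITY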

      One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups.
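
      A minimal sketch of a group, again with hypothetical resource names:

        # pcs resource group add examplegroup ExampleIP ExampleSite

      Members of a group are started in the order listed, stopped in the reverse order, and kept on the same node.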

      Pacemaker is a cluster resource manager. It achieves maximum availability for your cluster services and resources by making use of the cluster infrastructure’s messaging and membership capabilities to deter and recover from node and resource-level failure.

      1.3.1. Pacemaker architecture components

      A cluster configured with Pacemaker comprises separate component daemons that monitor cluster membership, scripts that manage the services, and resource management subsystems that monitor the disparate resources.

      The following components form the Pacemaker architecture:

      Cluster Information Base (CIB)
      The Pacemaker information daemon, which uses XML internally to distribute and synchronize current configuration and status information from the Designated Coordinator (DC) — a node assigned by Pacemaker to store and distribute cluster state and actions by means of the CIB — to all other cluster nodes.
      Cluster Resource Management Daemon (CRMd)

      Pacemaker cluster resource actions are routed through this daemon. Resources managed by CRMd can be queried by client systems, moved, instantiated, and changed when needed.

      Each cluster node also includes a local resource manager daemon (LRMd) that acts as an interface between CRMd and resources. LRMd passes commands from CRMd to agents, such as requests to start and stop resources, and relays status information.

      Shoot the Other Node in the Head (STONITH)
      STONITH is the Pacemaker fencing implementation. It acts as a cluster resource in Pacemaker that processes fence requests, forcefully shutting down nodes and removing them from the cluster to ensure data integrity. STONITH is configured in the CIB and can be monitored as a normal cluster resource.
      corosync

      corosync is the component, and a daemon of the same name, that serves the core membership and member-communication needs for high availability clusters. It is required for the High Availability Add-On to function.

      In addition to those membership and messaging functions, corosync also:

      • Manages quorum rules and determination.
      • Provides messaging capabilities for applications that coordinate or operate across multiple members of the cluster and thus must communicate stateful or other information between instances.
      • Uses the kronosnet library as its network transport to provide multiple redundant links and automatic failover.

      1.3.2. Pacemaker configuration and management tools

      The High Availability Add-On features two configuration tools for cluster deployment, monitoring, and management.

      The pcs command line interface controls and configures Pacemaker and the corosync heartbeat daemon. A command-line based program, pcs can perform the following cluster management tasks:

      • Create and configure a Pacemaker/Corosync cluster
      • Modify configuration of the cluster while it is running
      • Remotely configure both Pacemaker and Corosync as well as start, stop, and display status information of the cluster
      pcsd Web UI
      A graphical user interface to create and configure Pacemaker/Corosync clusters.
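
      The command examples below are a hedged sample of these tasks using standard RHEL 8 pcs syntax; the cluster and host names are placeholders:

        # pcs host auth z1.example.com z2.example.com
        # pcs cluster setup my_cluster --start z1.example.com z2.example.com
        # pcs status
        # pcs cluster stop --all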

      1.3.3. The cluster and pacemaker configuration files

      The configuration files for the Red Hat High Availability Add-On are corosync.conf and cib.xml.

      The corosync.conf file provides the cluster parameters used by corosync, the cluster manager that Pacemaker is built on. In general, you should not edit corosync.conf directly but, instead, use the pcs or pcsd interface.

      The cib.xml file is an XML file that represents both the cluster’s configuration and the current state of all resources in the cluster. This file is used by Pacemaker’s Cluster Information Base (CIB). The contents of the CIB are automatically kept in sync across the entire cluster. Do not edit the cib.xml file directly; use the pcs or pcsd interface instead.
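
      Rather than editing these files by hand, you can inspect them through pcs; as a hedged example:

        # pcs cluster corosync
        # pcs cluster cib

      The first command prints the corosync.conf in use; the second dumps the raw CIB XML (the contents of cib.xml) to standard output.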

      1.4. LVM logical volumes in a Red Hat high availability cluster

      The Red Hat High Availability Add-On provides support for LVM volumes in two distinct cluster configurations.

      The cluster configurations you can choose are as follows:

      • High availability LVM volumes (HA-LVM) in active/passive failover configurations in which only a single node of the cluster accesses the storage at any one time.
      • LVM volumes that use the lvmlockd daemon to manage storage devices in active/active configurations in which more than one node of the cluster requires access to the storage at the same time. The lvmlockd daemon is part of the Resilient Storage Add-On.

      1.4.1. Choosing HA-LVM or shared volumes

      When to use HA-LVM or shared logical volumes managed by the lvmlockd daemon should be based on the needs of the applications or services being deployed.

      • If multiple nodes of the cluster require simultaneous read/write access to LVM volumes in an active/active system, then you must use the lvmlockd daemon and configure your volumes as shared volumes. The lvmlockd daemon provides a system for coordinating activation of and changes to LVM volumes across nodes of a cluster concurrently. The locking service of lvmlockd protects LVM metadata as various nodes of the cluster interact with volumes and make changes to their layout. This protection is contingent upon configuring any volume group that will be activated simultaneously across multiple cluster nodes as a shared volume.
      • If the high availability cluster is configured to manage shared resources in an active/passive manner with only one single member needing access to a given LVM volume at a time, then you can use HA-LVM without the lvmlockd locking service.

      Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Choosing to run an application that is not cluster-aware on shared logical volumes may result in degraded performance. This is because there is cluster communication overhead for the logical volumes themselves in these instances. A cluster-aware application must be able to achieve performance gains above the performance losses introduced by cluster file systems and cluster-aware logical volumes. This is achievable for some applications and workloads more easily than others. Determining what the requirements of the cluster are and whether the extra effort toward optimizing for an active/active cluster will pay dividends is the way to choose between the two LVM variants. Most users will achieve the best HA results from using HA-LVM.

      HA-LVM and shared logical volumes using lvmlockd are similar in that they prevent corruption of LVM metadata and its logical volumes, which could otherwise occur if multiple machines are allowed to make overlapping changes. HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. A shared volume using lvmlockd does not impose these restrictions and a user is free to activate a logical volume on all machines in a cluster; this forces the use of cluster-aware storage drivers, which allow for cluster-aware file systems and applications to be put on top.
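
      As a rough sketch of the active/passive (HA-LVM) case, an LVM volume group is typically handed to the cluster as an LVM-activate resource; the resource name, volume group name, group, and access mode below are assumptions for illustration:

        # pcs resource create example_lvm ocf:heartbeat:LVM-activate vgname=example_vg vg_access_mode=system_id --group examplegroup

      In the active/active case the same agent is used with a shared access mode on top of lvmlockd, and the volume group must have been created as a shared volume group.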

      1.4.2. Configuring LVM volumes in a cluster

      Clusters are managed through Pacemaker. Both HA-LVM and shared logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources.

      These procedures provide an introduction to the tools and processes you use to create a Pacemaker cluster. They are intended for users who are interested in seeing what the cluster software looks like and how it is administered, without needing to configure a working cluster.

      These procedures do not create a supported Red Hat cluster, which requires at least two nodes and the configuration of a fencing device. For full information on Red Hat’s support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters.

      2.1. Learning to use Pacemaker

      By working through this procedure, you will learn how to use Pacemaker to set up a cluster, how to display cluster status, and how to configure a cluster service. This example creates an Apache HTTP server as a cluster resource and shows how the cluster responds when the resource fails.

      In this example:

      • The node is z1.example.com.
      • The floating IP address is 192.168.122.120.

      Prerequisites

      • A single node running RHEL 8
      • A floating IP address that resides on the same network as one of the node’s statically assigned IP addresses
      • The name of the node on which you are running is in your /etc/hosts file

      Procedure

      1. Install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service.

        # ... # #

        If you are running the firewalld daemon, enable the ports that are required by the Red Hat High Availability Add-On.

        # #
      2. Set a password for user hacluster on each node in the cluster and authenticate user hacluster for each node in the cluster on the node from which you will be running the commands. This example is using only a single node, the node from which you are running the commands, but this step is included here since it is a necessary step in configuring a supported Red Hat High Availability multi-node cluster.

        # ... #
      3. Create a cluster named my_cluster with one member and check the status of the cluster. This command creates and starts the cluster in one step.

        # ... # Cluster Status: Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com 1 node configured 0 resources configured PCSD Status: z1.example.com: Online
      4. A Red Hat High Availability cluster requires that you configure fencing for the cluster. The reasons for this requirement are described in Fencing in a Red Hat High Availability Cluster. For this introduction, however, which is intended to show only how to use the basic Pacemaker commands, disable fencing by setting the stonith-enabled cluster option to false.

        The use of stonith-enabled=false is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced.

        #
      5. Configure an Apache HTTP server on your system and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by the Apache HTTP server.

        Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot.

        # ... # # #

        In order for the Apache resource agent to get the status of Apache, create the following addition to the existing configuration to enable the status server URL.

        #
      6. Create IPaddr2 and apache resources for the cluster to manage. The 'IPaddr2' resource is a floating IP address that must not be one already associated with a physical node. If the 'IPaddr2' resource’s NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node.

        You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe resourcetype command to display the parameters you can set for the specified resource type. For example, the following command displays the parameters you can set for a resource of type apache:

        # ...

        In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup, which ensures that the resources are kept together to run on the same node when you are configuring a working multi-node cluster.

        # # # Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Fri Oct 12 09:54:33 2018 Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com 1 node configured 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com PCSD Status: z1.example.com: Online ...

        After you have configured a cluster resource, you can use the pcs resource config command to display the options that are configured for that resource.

        # Resource: WebSite (class=ocf provider=heartbeat type=apache) Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status Operations: start interval=0s timeout=40s (WebSite-start-interval-0s) stop interval=0s timeout=60s (WebSite-stop-interval-0s) monitor interval=1min (WebSite-monitor-interval-1min)
      7. Point your browser to the website you created using the floating IP address you configured. This should display the text message you defined.
      8. Stop the apache web service and check the cluster status. Using killall -9 simulates an application-level crash.

        #

        Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service and you should still be able to access the website.

        # Cluster name: my_cluster ... Current DC: z1.example.com (version 1.1.13-10.el7-44eb2dd) - partition with quorum 1 node and 2 resources configured Online: [ z1.example.com ] Full list of resources: Resource Group: apachegroup ClusterIP (ocf::heartbeat:IPaddr2): Started z1.example.com WebSite (ocf::heartbeat:apache): Started z1.example.com Failed Resource Actions: * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=13, status=complete, exitreason='none', last-rc-change='Thu Oct 11 23:45:50 2016', queued=0ms, exec=0ms PCSD Status: z1.example.com: Online

        You can clear the failure status on the resource that failed once the service is up and running again and the failed action notice will no longer appear when you view the cluster status.

        #
      9. When you are finished looking at the cluster and the cluster status, stop the cluster services on the node. Even though you have only started services on one node for this introduction, the --all parameter is included since it would stop cluster services on all nodes on an actual multi-node cluster.

        #
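
      The command lines in the preceding procedure were lost in this copy. The following is a hedged reconstruction of the typical RHEL 8 commands for steps 1 through 9, using the node z1.example.com, the floating IP 192.168.122.120, and the cluster and group names shown in the example output; the package names, service names, and options are standard RHEL 8 High Availability tooling, assumed rather than restored verbatim from the original guide.

        # yum install pcs pacemaker fence-agents-all
        # systemctl start pcsd.service
        # systemctl enable pcsd.service
        # firewall-cmd --permanent --add-service=high-availability
        # firewall-cmd --add-service=high-availability
        # passwd hacluster
        # pcs host auth z1.example.com
        # pcs cluster setup my_cluster --start z1.example.com
        # pcs cluster status
        # pcs property set stonith-enabled=false
        # yum install -y httpd wget
        # firewall-cmd --permanent --add-service=http
        # firewall-cmd --reload
        # echo "<html><body>Hello from my_cluster</body></html>" > /var/www/html/index.html

      The status-server addition written to /etc/httpd/conf.d/status.conf in step 5 is, in a minimal form:

        <Location /server-status>
            SetHandler server-status
            Require local
        </Location>

      The remaining steps:

        # pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup
        # pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group apachegroup
        # pcs resource config WebSite
        # killall -9 httpd
        # pcs resource cleanup WebSite
        # pcs cluster stop --all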

      2.2. Learning to configure failover

      This procedure provides an introduction to creating a Pacemaker cluster running a service that will fail over from one node to another when the node on which the service is running becomes unavailable. By working through this procedure, you can learn how to create a service in a two-node cluster and you can then observe what happens to that service when it fails on the node on which it is running.

      This example procedure configures a two-node Pacemaker cluster running an Apache HTTP server. You can then stop the Apache service on one node to see how the service remains available.

      In this example:

      • The nodes are z1.example.com and z2.example.com.
      • The floating IP address is 192.168.122.120.

      Prerequisites

      • Two nodes running RHEL 8 that can communicate with each other
      • A floating IP address that resides on the same network as one of the node’s statically assigned IP addresses
      • The name of the node on which you are running is in your /etc/hosts file

      Procedure

      1. On both nodes, install the Red Hat High Availability Add-On software packages from the High Availability channel, and start and enable the pcsd service.

        # ... # #

        If you are running the firewalld daemon, on both nodes enable the ports that are required by the Red Hat High Availability Add-On.

        # #
      2. On both nodes in the cluster, set a password for user hacluster.

        #
      3. Authenticate user hacluster for each node in the cluster on the node from which you will be running the commands.

        #
      4. Create a cluster named my_cluster with both nodes as cluster members. This command creates and starts the cluster in one step. You only need to run this from one node in the cluster because configuration commands take effect for the entire cluster.

        On one node in cluster, run the following command.

        #
      5. A Red Hat High Availability cluster requires that you configure fencing for the cluster. The reasons for this requirement are described in Fencing in a Red Hat High Availability Cluster. For this introduction, however, to show only how failover works in this configuration, disable fencing by setting the stonith-enabled cluster option to false.

        The use of is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely fenced.

        #
      6. After creating a cluster and disabling fencing, check the status of the cluster.

        When you run the pcs cluster status command, it may show output that temporarily differs slightly from the examples as the system components start up.

        #
        Cluster Status:
         Stack: corosync
         Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
         Last updated: Thu Oct 11 16:11:18 2018
         Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z1.example.com
         2 nodes configured
         0 resources configured

        PCSD Status:
          z1.example.com: Online
          z2.example.com: Online
      7. On both nodes, configure an Apache HTTP server and create a web page to display a simple text message. If you are running the firewalld daemon, enable the ports that are required by httpd.

        Do not use systemctl enable to enable any services that will be managed by the cluster to start at system boot.

        # ... # # #

        In order for the Apache resource agent to get the status of Apache, on each node in the cluster create the following addition to the existing configuration to enable the status server URL.

        #
      8. Create IPaddr2 and apache resources for the cluster to manage. The 'IPaddr2' resource is a floating IP address that must not be one already associated with a physical node. If the 'IPaddr2' resource’s NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP address used by the node.

        You can display a list of all available resource types with the pcs resource list command. You can use the pcs resource describe command to display the parameters you can set for a specified resource type. For example, the following command displays the parameters you can set for a resource of type apache:

        # ...

        In this example, the IP address resource and the apache resource are both configured as part of a group named apachegroup, which ensures that the resources are kept together to run on the same node.

        Run the following commands from one node in the cluster:

        #
        #
        #
        Cluster name: my_cluster
        Stack: corosync
        Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
        Last updated: Fri Oct 12 09:54:33 2018
        Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

        2 nodes configured
        2 resources configured

        Online: [ z1.example.com z2.example.com ]

        Full list of resources:

         Resource Group: apachegroup
             ClusterIP  (ocf::heartbeat:IPaddr2):  Started z1.example.com
             WebSite    (ocf::heartbeat:apache):   Started z1.example.com

        PCSD Status:
          z1.example.com: Online
          z2.example.com: Online
        ...

        Note that in this instance, the service is running on node z1.example.com.

      9. Access the website you created, stop the service on the node on which it is running, and note how the service fails over to the second node.

        1. Point a browser to the website you created using the floating IP address you configured. This should display the text message you defined, displaying the name of the node on which the website is running.
        2. Stop the apache web service. Stopping the web service this way simulates an application-level crash.

          #

          Check the cluster status. You should see that stopping the web service caused a failed action, but that the cluster software restarted the service on the node on which it had been running, and you should still be able to access the website.

          #
          Cluster name: my_cluster
          Stack: corosync
          Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
          Last updated: Fri Oct 12 09:54:33 2018
          Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

          2 nodes configured
          2 resources configured

          Online: [ z1.example.com z2.example.com ]

          Full list of resources:

           Resource Group: apachegroup
               ClusterIP  (ocf::heartbeat:IPaddr2):  Started z1.example.com
               WebSite    (ocf::heartbeat:apache):   Started z1.example.com

          Failed Resource Actions:
          * WebSite_monitor_60000 on z1.example.com 'not running' (7): call=31, status=complete, exitreason='none',
              last-rc-change='Fri Feb 5 21:01:41 2016', queued=0ms, exec=0ms

          Clear the failure status once the service is up and running again.

          #
        3. Put the node on which the service is running into standby mode. Note that since we have disabled fencing we can not effectively simulate a node-level failure (such as pulling a power cable) because fencing is required for the cluster to recover from such situations.

          #
        4. Check the status of the cluster and note where the service is now running.

          #
          Cluster name: my_cluster
          Stack: corosync
          Current DC: z1.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
          Last updated: Fri Oct 12 09:54:33 2018
          Last change: Fri Oct 12 09:54:30 2018 by root via cibadmin on z1.example.com

          2 nodes configured
          2 resources configured

          Node z1.example.com: standby
          Online: [ z2.example.com ]

          Full list of resources:

           Resource Group: apachegroup
               ClusterIP  (ocf::heartbeat:IPaddr2):  Started z2.example.com
               WebSite    (ocf::heartbeat:apache):   Started z2.example.com
        5. Access the website. There should be no loss of service, although the display message should indicate the node on which the service is now running.
      10. To restore cluster services to the first node, take the node out of standby mode. This will not necessarily move the service back to that node.

        #
      11. For final cleanup, stop the cluster services on both nodes.

        #
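      A consolidated, minimal sketch of the commands behind steps 1-11 above, since the literal commands were stripped from this copy of the page. The node names (z1.example.com, z2.example.com), the cluster name my_cluster, the floating IP 192.168.122.120, and the resource names ClusterIP and WebSite are taken from the surrounding text and status output; the package names, firewalld service names, test web page, and Apache status snippet are assumptions based on the standard RHEL 8 High Availability command set, not confirmed by this copy.

        Step 1 (both nodes): install the packages, start and enable pcsd, open the firewall
          yum install pcs pacemaker fence-agents-all
          systemctl start pcsd.service
          systemctl enable pcsd.service
          firewall-cmd --permanent --add-service=high-availability
          firewall-cmd --add-service=high-availability

        Step 2 (both nodes): set the hacluster password
          passwd hacluster

        Step 3 (one node): authenticate hacluster for each node
          pcs host auth z1.example.com z2.example.com

        Step 4 (one node): create and start the cluster
          pcs cluster setup my_cluster --start z1.example.com z2.example.com

        Step 5: disable fencing (for this introduction only)
          pcs property set stonith-enabled=false

        Step 6: check the cluster status
          pcs cluster status

        Step 7 (both nodes): install and configure the web server, open the http port, create a test page
          yum install -y httpd wget
          firewall-cmd --permanent --add-service=http
          firewall-cmd --reload
          echo "<html><body>Hello from $(hostname)</body></html>" > /var/www/html/index.html

          Status server URL for the apache resource agent (for example in /etc/httpd/conf.d/status.conf):
            <Location /server-status>
                SetHandler server-status
                Require local
            </Location>

        Step 8 (one node): create the floating IP and web server resources in one group
          pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 --group apachegroup
          pcs resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group apachegroup
          pcs status

        Step 9: simulate an application-level failure, clear it, then move the service by putting the node in standby
          killall -9 httpd
          pcs status
          pcs resource cleanup WebSite
          pcs node standby z1.example.com
          pcs status

        Step 10: take the node out of standby
          pcs node unstandby z1.example.com

        Step 11: stop the cluster services on both nodes
          pcs cluster stop --all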

      The pcs command line interface controls and configures cluster services such as corosync, pacemaker, booth, and pcsd by providing an easier interface to their configuration files.

      Note that you should not edit the cib.xml configuration file directly. In most cases, Pacemaker will reject a directly modified cib.xml file.

      You can use the --help option of pcs commands to display the parameters of a command and a description of those parameters.

      The following command displays the parameters of the command. Only a portion of the output is shown.

      #
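      The command itself was stripped from this copy; it is most likely a pcs --help invocation along the lines of the following, shown for illustration only:

        pcs resource create --help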

      3.2. Viewing the raw cluster configuration

      Although you should not edit the cluster configuration file directly, you can view the raw cluster configuration with the pcs cluster cib command.

      You can save the raw cluster configuration to a specified file by adding a file name to the pcs cluster cib command. If you have previously configured a cluster and there is already an active CIB, use the following command to save the raw XML to a file.

      pcs cluster cib filename

      For example, the following command saves the raw XML from the CIB into a file named testfile.

      pcs cluster cib testfile

      3.3. Saving a configuration change to a working file

      When configuring a cluster, you can save configuration changes to a specified file without affecting the active CIB. This allows you to specify configuration updates without immediately updating the currently running cluster configuration with each individual update.

      For information on saving the CIB to a file, see Viewing the raw cluster configuration. Once you have created that file, you can save configuration changes to that file rather than to the active CIB by using the -f option of the pcs command. When you have completed the changes and are ready to update the active CIB file, you can push those file updates with the pcs cluster cib-push command.

      Procedure

      The following is the recommended procedure for pushing changes to the CIB file. This procedure creates a copy of the original saved CIB file and makes changes to that copy. When pushing those changes to the active CIB, this procedure specifies the diff-against option of the pcs cluster cib-push command so that only the changes between the original file and the updated file are pushed to the CIB. This allows users to make changes in parallel that do not overwrite each other, and it reduces the load on Pacemaker, which does not need to parse the entire configuration file.

      1. Save the active CIB to a file. The sketch after this procedure uses original.xml as an example file name.

        #
      2. Copy the saved file to the working file you will be using for the configuration updates.

        #
      3. Update your configuration as needed. The following command creates a resource in the file but does not add that resource to the currently running cluster configuration.

        #
      4. Push the updated file to the active CIB, specifying that you are pushing only the changes you have made to the original file.

        #
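      A minimal sketch of this procedure, using assumed file names (original.xml, updated.xml) and an illustrative IPaddr2 resource; the exact file and resource names were stripped from this copy and are not confirmed by it.

        pcs cluster cib original.xml
        cp original.xml updated.xml
        pcs -f updated.xml resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.122.120 op monitor interval=30s
        pcs cluster cib-push updated.xml diff-against=original.xml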

      Alternately, you can push the entire current content of a CIB file with the following command.

      pcs cluster cib-push filename

      When pushing the entire CIB file, Pacemaker checks the version and does not allow you to push a CIB file which is older than the one already in a cluster. If you need to update the entire CIB file with a version that is older than the one currently in the cluster, you can use the --config option of the pcs cluster cib-push command.

      pcs cluster cib-push --config filename

      3.4. Displaying cluster status

      There are a variety of commands you can use to display the status of a cluster and its components.

      You can display the status of the cluster and the cluster resources with the following command.

      pcs status

      You can display the status of a particular cluster component with the commands parameter of the pcs status command, specifying resources, cluster, nodes, or pcsd.

      pcs status commands

      For example, the following command displays the status of the cluster resources.

      pcs status resources

      The following command displays the status of the cluster, but not the cluster resources.

      pcs cluster status

      3.5. Displaying the full cluster configuration

      Use the following command to display the full current cluster configuration.

      pcs config

      3.6. Modifying the corosync.conf file with the pcs command

      As of Red Hat Enterprise Linux 8.4, you can use the pcs cluster config update command to modify the parameters in the corosync.conf file.

      The following command modifies the corosync.conf parameters.

      pcs cluster config update [transport transport options] [compression compression options] [crypto crypto options] [totem totem options] [--corosync_conf path]

      The following example command updates the knet_pmtud_interval transport value and the token and join totem values.

      pcs cluster config update transport knet_pmtud_interval=35 totem token=10000 join=100


      3.7. Displaying the corosync.conf file with the pcs command

      The following pcs cluster corosync command displays the contents of the corosync.conf cluster configuration file.

      # pcs cluster corosync

      As of Red Hat Enterprise Linux 8.4, you can print the contents of the corosync.conf file in a human-readable format with the pcs cluster config command, as in the following example.

      [root@r8-node-01 ~]#
      Cluster Name: HACluster
      Transport: knet
      Nodes:
        r8-node-01:
          Link 0 address: r8-node-01
          Link 1 address: 192.168.122.121
          nodeid: 1
        r8-node-02:
          Link 0 address: r8-node-02
          Link 1 address: 192.168.122.122
          nodeid: 2
      Links:
        Link 1:
          linknumber: 1
          ping_interval: 1000
          ping_timeout: 2000
          pong_count: 5
      Compression Options:
        level: 9
        model: zlib
        threshold: 150
      Crypto Options:
        cipher: aes256
        hash: sha256
      Totem Options:
        downcheck: 2000
        join: 50
        token: 10000
      Quorum Device: net
        Options:
          sync_timeout: 2000
          timeout: 3000
        Model Options:
          algorithm: lms
          host: r8-node-03
        Heuristics:
          exec_ping: ping -c 1 127.0.0.1

      As of RHEL 8.4, you can run the pcs cluster config show command with the --output-format=cmd option to display the pcs configuration commands that can be used to recreate the existing corosync.conf file, as in the following example.

      [root@r8-node-01 ~]#
      pcs cluster setup HACluster \
        r8-node-01 addr=r8-node-01 addr=192.168.122.121 \
        r8-node-02 addr=r8-node-02 addr=192.168.122.122 \
        transport \
        knet \
          link \
            linknumber=1 \
            ping_interval=1000 \
            ping_timeout=2000 \
            pong_count=5 \
          compression \
            level=9 \
            model=zlib \
            threshold=150 \
          crypto \
            cipher=aes256 \
            hash=sha256 \
        totem \
          downcheck=2000 \
          join=50 \
          token=10000

      The following procedure creates a Red Hat High Availability two-node cluster using the command line interface.

      Configuring the cluster in this example requires that your system include the following components:

      • 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com.
      • Network switches for the private network. We recommend but do not require a private network for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
      • A fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com.

      4.1. Installing cluster software

      This procedure installs the cluster software and configures your system for cluster creation.

      Procedure

      1. On each node in the cluster, enable the repository for high availability that corresponds to your system architecture. For example, to enable the high availability repository for an x86_64 system, you can enter the following command:

        #
      2. On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.

        #

        Alternatively, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command.

        #

        The following command displays a list of the available fence agents.

        #
        fence-agents-rhevm-4.0.2-3.el7.x86_64
        fence-agents-ilo-mp-4.0.2-3.el7.x86_64
        fence-agents-ipmilan-4.0.2-3.el7.x86_64
        ...
      3. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.

        You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If it is installed, you can determine whether it is running with the firewall-cmd --state command.

        # #

        The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Enabling ports for the High Availability Add-On shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what each port is used for.

      4. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster, which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node.

        #
        Changing password for user hacluster.
        New password:
        Retype new password:
        passwd: all authentication tokens updated successfully.
      5. Before the cluster can be configured, the pcsd daemon must be started and enabled to start up on boot on each node. This daemon works with the pcs command to manage configuration across the nodes in the cluster.

        On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start. (A consolidated command sketch for this installation procedure appears after this step.)

        # #
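      A minimal sketch of the stripped commands in the procedure above. The repository name, package names, and firewalld service name are assumptions based on the standard RHEL 8 High Availability documentation, not confirmed by this copy; fence-agents-apc-snmp is shown only as an example of installing a single agent.

        Step 1: enable the High Availability repository (x86_64 example)
          subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms

        Step 2: install the cluster packages and fence agents, then list the fence agent packages
          yum install pcs pacemaker fence-agents-all
          yum install pcs pacemaker fence-agents-apc-snmp
          rpm -q -a | grep fence

        Step 3: open the firewall for cluster traffic
          firewall-cmd --permanent --add-service=high-availability
          firewall-cmd --add-service=high-availability

        Step 4: set the hacluster password on each node
          passwd hacluster

        Step 5: start and enable pcsd on each node
          systemctl start pcsd.service
          systemctl enable pcsd.service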

      4.2. Installing the pcp-zeroconf package (recommended)

      When you set up your cluster, it is recommended that you install the pcp-zeroconf package for the Performance Co-Pilot (PCP) tool. PCP is Red Hat’s recommended resource-monitoring tool for RHEL systems. Installing the pcp-zeroconf package allows you to have PCP running and collecting performance-monitoring data for the benefit of investigations into fencing, resource failures, and other events that disrupt the cluster.

      Cluster deployments where PCP is enabled will need sufficient space available for PCP’s captured data on the file system that contains /var/log/pcp. Typical space usage by PCP varies across deployments, but 10 GB is usually sufficient when using the default settings, and some environments may require less. Monitoring usage in this directory over a 14-day period of typical activity can provide a more accurate usage expectation.

      Procedure

      To install the pcp-zeroconf package, run the following command.

      #
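      The stripped command here is presumably the standard package installation, along the lines of the following (an assumption based on the package name above):

        yum install pcp-zeroconf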

      This package enables PCP and sets up data capture at a 10-second interval.

      For information on reviewing PCP data, see Why did a RHEL High Availability cluster node reboot - and how can I prevent it from happening again? on the Red Hat Customer Portal.

      4.3. Creating a high availability cluster

      This procedure creates a Red Hat High Availability Add-On cluster that consists of the nodes z1.example.com and z2.example.com.

      Procedure

      1. Authenticate the hacluster user for each node in the cluster on the node from which you will be running pcs.

        The following command authenticates user hacluster on z1.example.com for both of the nodes in a two-node cluster that will consist of z1.example.com and z2.example.com.

        [root@z1 ~]#
        Username: hacluster
        Password:
        z1.example.com: Authorized
        z2.example.com: Authorized
      2. Execute the following command from z1.example.com to create the two-node cluster that consists of nodes z1.example.com and z2.example.com. This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster.

        [root@z1 ~]#
      3. Enable the cluster services to run on each node in the cluster when the node is booted. (A command sketch for this procedure appears after this step.)

        For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node.

        [root@z1 ~]#
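      A minimal sketch of the stripped commands in this procedure, assuming the cluster is named my_cluster (the cluster name is not stated in this copy); the pcs invocations follow the standard RHEL 8 syntax.

        pcs host auth z1.example.com z2.example.com
        pcs cluster setup my_cluster --start z1.example.com z2.example.com
        pcs cluster enable --all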

      You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration.

      [root@z1 ~]#
      Cluster Status:
       Stack: corosync
       Current DC: z2.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum
       Last updated: Thu Oct 11 16:11:18 2018
       Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z2.example.com
       2 Nodes configured
       0 Resources configured
      ...

      4.4. Creating a high availability cluster with multiple links

      You can use the pcs cluster setup command to create a Red Hat High Availability cluster with multiple links by specifying all of the links for each node.

      The format for the command to create a two-node cluster with two links is as follows.

      pcs cluster setup cluster_name node1_name addr=node1_link0_address addr=node1_link1_address node2_name addr=node2_link0_address addr=node2_link1_address

      When creating a cluster with multiple links, you should take the following into account.

      • The order of the addr parameters is important. The first address specified after a node name is for link0, the second one for link1, and so forth.
      • It is possible to specify up to eight links using the knet transport protocol, which is the default transport protocol.
      • All nodes must have the same number of addr parameters.
      • As of RHEL 8.1, it is possible to add, remove, and change links in an existing cluster using the pcs cluster link add, the pcs cluster link remove, the pcs cluster link delete, and the pcs cluster link update commands.
      • As with single-link clusters, do not mix IPv4 and IPv6 addresses in one link, although you can have one link running IPv4 and the other running IPv6.
      • As with single-link clusters, you can specify addresses as IP addresses or as names as long as the names resolve to IPv4 or IPv6 addresses for which IPv4 and IPv6 addresses are not mixed in one link.

      Procedure

      The following example creates a two-node cluster in which each node has two interfaces. The first node has IP address 192.168.122.201 as its link0 address and 192.168.123.201 as its link1 address. The second node has IP address 192.168.122.202 as its link0 address and 192.168.123.202 as its link1 address.

      #
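      A likely form of the stripped command, using hypothetical node names (node1.example.com, node2.example.com) and a hypothetical cluster name, since neither appears in this copy:

        pcs cluster setup my_twolink_cluster \
          node1.example.com addr=192.168.122.201 addr=192.168.123.201 \
          node2.example.com addr=192.168.122.202 addr=192.168.123.202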

      For information on adding nodes to an existing cluster with multiple links, see Adding a node to a cluster with multiple links.

      For information on changing the links in an existing cluster with multiple links, see Adding and modifying links in an existing cluster.

      4.5. Configuring fencing

      You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see Configuring fencing in a Red Hat High Availability cluster.

      For general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster.

      When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device do share power, then the cluster may be at risk of being unable to fence that node if the power to it and its fence device should be lost. Such a cluster should either have redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing such as SBD or storage fencing may also bring redundancy in the event of isolated power losses.

      Procedure

      This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map option.

      You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com. The pcmk_host_map option maps z1.example.com to port 1 and z2.example.com to port 2. The login value and password for the APC device are both apc. By default, this device will use a monitor interval of sixty seconds for each node.

      Note that you can use an IP address when specifying the host name for the nodes.

      [root@z1 ~]# pcs stonith create myapc fence_apc_snmp \
        ipaddr="zapc.example.com" \
        pcmk_host_map="z1.example.com:1;z2.example.com:2" \
        login="apc" passwd="apc"

      The following command displays the parameters of an existing STONITH device.

      [root@rh7-1 ~]#
       Resource: myapc (class=stonith type=fence_apc_snmp)
        Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 login=apc passwd=apc
        Operations: monitor interval=60s (myapc-monitor-interval-60s)

      After configuring your fence device, you should test the device. For information on testing a fence device, see Testing a fence device.

      Do not test your fence device by disabling the network interface, as this will not properly test fencing.

      Once fencing is configured and a cluster has been started, a network restart will trigger fencing for the node which restarts the network even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running because it will trigger unintentional fencing on the node.

      4.6. Backing up and restoring a cluster configuration

      The following commands back up a cluster configuration in a tar archive and restore the cluster configuration files on all nodes from the backup.

      Procedure

      Use the following command to back up the cluster configuration in a tar archive. If you do not specify a file name, the standard output will be used.

      pcs config backup filename

      The command backs up only the cluster configuration itself as configured in the CIB; the configuration of resource daemons is out of the scope of this command. For example, if you have configured an Apache resource in the cluster, the resource settings (which are in the CIB) will be backed up, while the Apache daemon settings (as set in /etc/httpd) and the files it serves will not be backed up. Similarly, if there is a database resource configured in the cluster, the database itself will not be backed up, while the database resource configuration (CIB) will be.

      Use the following command to restore the cluster configuration files on all nodes from the backup. If you do not specify a file name, the standard input will be used. Specifying the --local option restores only the files on the current node.

      pcs config restore [--local] [filename]

      4.7. Enabling ports for the High Availability Add-On

      The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present.

      If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On.

      # #
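      The two stripped commands are presumably the standard firewalld service enablement for the Add-On (an assumption based on the firewalld service name documented for the High Availability Add-On):

        firewall-cmd --permanent --add-service=high-availability
        firewall-cmd --add-service=high-availability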

      You may need to modify which ports are open to suit local conditions.

      You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the daemon is installed, you can determine whether it is running with the firewall-cmd --state command.

      The following table shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for.

      Table 4.1. Ports to Enable for High Availability Add-On

      Port | When Required

      TCP 2224

      Default port required on all nodes (needed by the pcsd Web UI and required for node-to-node communication). You can configure the port by means of the PCSD_PORT parameter in the /etc/sysconfig/pcsd file.

      It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbiters or the quorum device host.

      TCP 3121

      Required on all nodes if the cluster has any Pacemaker Remote nodes

      Pacemaker’s pacemaker-based daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should be open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host’s network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes.

      TCP 5403

      Required on the quorum device host when using a quorum device with corosync-qnetd. The default value can be changed with the -p option of the corosync-qnetd command.

      UDP 5404-5412

      Required on corosync nodes to facilitate communication between nodes. It is crucial to open ports 5404-5412 in such a way that corosync from any node can talk to all nodes in the cluster, including itself.

      TCP 21064

      Required on all nodes if the cluster contains any resources requiring DLM (such as GFS2).

      TCP 9929, UDP 9929

      Required to be open on all cluster nodes and booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster.

      This procedure configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using the command line interface to configure cluster resources. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption.

      The following illustration shows a high-level overview of the cluster in which the cluster is a two-node Red Hat High Availability cluster which is configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. In this illustration, the web server is running on Node 1 while Node 2 is available to run the server if Node 1 becomes inoperative.

      Figure 5.1. Apache in a Red Hat High Availability Two-Node Cluster

      This use case requires that your system include the following components:

      • A two-node Red Hat High Availability cluster with power fencing configured for each node. We recommend but do not require a private network. This procedure uses the cluster example provided in Creating a Red Hat High-Availability cluster with Pacemaker.
      • A public virtual IP address, required for Apache.
      • Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device.

      The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you will be performing the following procedures:

      1. Configure an ext4 file system on an LVM logical volume.
      2. Configure a web server.

      After performing these steps, you create the resource group and the resources it contains.

      5.1. Configuring an LVM volume with an ext4 file system in a Pacemaker cluster

      This procedure creates an LVM logical volume on storage that is shared between the nodes of the cluster.

      LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.
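      The individual steps of this procedure are not reproduced in this copy. A minimal sketch of the usual flow, assuming a shared device /dev/sdb1 and illustrative volume names (my_vg, my_lv), none of which are confirmed by this copy:

        pvcreate /dev/sdb1
        vgcreate my_vg /dev/sdb1
        lvcreate -L 450M -n my_lv my_vg
        mkfs.ext4 /dev/my_vg/my_lv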

      Source: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index

      Veritas Entitlement Management System Guide

      NOTE

      Updated VEMS guide: https://www.veritas.com/content/support/en_US/article.100048764. This is a dynamic document and is intended to eventually replace several documents, including this article. Please refer to the updated VEMS guide for the latest information.

       

      Important Reminder

      If you are a VEMS administrator (you will see a gear-shaped icon in the upper right corner of VEMS if you are an administrator), please log into VEMS frequently to ensure that self-service requests are able to be submitted. If no administrator has logged into your account within the last 6 months, self-service requests will be routed to Customer Care.

      This is especially important if you have been granted VEMS administrator access as part of an acquisition, as others within your organization may need to request access to VEMS. 

       

      Introduction

      This article contains basic information about the Veritas Entitlement Management System (VEMS).  For more in-depth information, please refer to the user guides attached at the bottom of the page.

      Additional Resources

      Introducing VEMS Self-Service: https://www.veritas.com/support/en_US/article.100047250

      Adding and Removing VEMS Users for Administrators: https://www.veritas.com/support/en_US/article.100044710

      VEMS User Privileges: https://www.veritas.com/support/en_US/article.100046848

       

      About the Veritas Entitlement Management System

      The Veritas Entitlement Management System (VEMS) is an entitlement management portal that provides access to entitlements purchased from Veritas. As orders are fulfilled by Veritas, Entitlements are created in the customer's VEMS Account that allows the Users of the VEMS account to access the Entitlement information, to download software, and to generate license keys within that Account.

      Users who need to be able to view and manage entitlement information, to download software, to generate license keys, and to open technical support cases need to be able to access VEMS.

      The general features of the Veritas Entitlement Management System include:

      • Ability to view and manage Entitlements across multiple VEMS accounts
      • Ability to download software associated with Entitlements
      • Ability to generate and manage license keys
      • Ability to manage Enterprise Flex contracts
      • Ability to manage User access
      • Ability to group Entitlements
      Who can Access a VEMS account?

      Entitlements are held in VEMS Accounts. An Entitlement can only be held by a single VEMS Account. Active Users of an Account can access Entitlements held by a VEMS Account.

      Customers are responsible for managing their VEMS Accounts and ensuring that only authorized Users have access to their VEMS Accounts. Users need access to a VEMS Account to be able to manage Users, to access Entitlement information, to download software, to generate license keys, and to open technical support cases.

      Users can be associated with an unlimited number of Accounts. Users can have different levels of privileges across Accounts. These privileges include being able to manage User access as an Administrator, being able to download software, being able to generate license keys, and being restricted to only being able to view Entitlements.

      Account Administrators are able to add, remove, and modify User access. User access may also be granted by the Veritas order fulfillment process. The Entitlement Owner on the Veritas order is added to the VEMS account and provided the ability to access all Entitlements held by the Account associated with the order. The first Entitlement Owner User added to an Account is granted Administrator privileges.

      ShipTo contacts on the Veritas order may also be given access to specific Entitlements within an Account. Orders must include a ShipTo Account Number that is different from the Entitlement Owner's Account Number in order for the ShipTo contact to be provided access. The ShipTo contact will be provided access to Entitlements created by the order for at least sixty days. After sixty days, an Account administrator can remove the ShipTo contact's access. For more information, see the VEMS User Guide.

       

      Accessing VEMS

      1. Open https://www.veritas.com/support in a web browser
      2. Click Licensing

      NOTE:  Users need to have a Veritas Account. New Users will need to register by clicking Register Now on the Veritas Account login page.

       

      Password Resets and Lockouts

      Entering an incorrect password three times will result in a 30 minute lockout. To avoid this, please use the "Forgot Password" option at the login page of https://www.veritas.com/support.

      If your account has been locked, you may either wait 30 minutes or contact Customer Care to have your account unlocked and a temporary password issued. Customer Care contact information is available at https://www.veritas.com/content/support/en_US/contact-us.html

       

      Downloading Software

      Veritas Download Center

      Please see https://www.veritas.com/support/en_US/article.100046183.html for more information on the Veritas Download Center.

      The Veritas Download Center will still be accessible by following the existing steps for accessing downloads in VEMS:

      Software downloads can be accessed from the Dashboard, Entitlements, and Downloads tabs by clicking the download button. Which one to use is a matter of preference. In each tab, entitlements can be filtered by product or located by Entitlement ID.

       

      Generating and Accessing Licenses

      Licenses can be generated in the Dashboard and Entitlements tab by clicking the key generation button. Entitlements can be sorted by product, account name, etc. They can also be located by Entitlement ID, contract number, and other search criteria. Which one to use is a matter of preference. Customers will be asked to select a version and quantity when generating keys and can add notes to the key generated.

      The License Keys tab displays license keys and files that have already been generated. Users may also save, print, and email generated licenses from this tab under Actions.

      NOTE: Box product serial numbers and vouchers must be registered by clicking the "Redeem Voucher & Register Serial Number" icon in the Entitlements tab. The information required to register (serial number for box products, voucher and order number for vouchers) will be included in the packaging.

      Entitlement Status

      Entitlements exist in two statuses: Active and Replaced. Replaced entitlements are those that have been replaced as part of a substitution program. To see these entitlements and to obtain information on the corresponding Active entitlement, filter the Entitlement Status in the Entitlements tab to Replaced. Clicking on the Replaced entitlement will display the replacement Active Entitlement ID.

      Obtaining VEMS Access

      See https://www.veritas.com/support/en_US/article.100047250 for detailed information on VEMS Self-Service.

      VEMS Privileges

      There are five access levels in VEMS:  License, Download, Admin, View, and Support.

      Support access provides the capability to open technical support cases based on your account's entitlements. It does not include the capability to view or generate licenses, download software, or administer users. Support access does not require administrator permission for employees of the end-user company. Third-parties will need end-user administrator permission for all access levels. A third-party is anyone who is not a direct employee of the end-user organisation. Third-parties may also open technical support cases on behalf of an end user at https://www.veritas.com/support in the Support Cases section, but this does not provide access to the customer's entitlements. This requires the submitter to provide specific entitlement information for the end-user's active support agreement, such as an Entitlement ID, Support ID, or Account Number.

      For all other access levels:

      View provides the ability to see all entitlements, but not generate licenses or download software.

      License provides the ability to generate licenses and can be added separately from or in combination with Download.

      Download provides the ability to download software and can be added separately from or in combination with License.

      Admin provides the ability to open technical support cases for all entitlements on associated accounts, generate licenses, download software, and add and remove users. Admins are responsible for approving access and certificate reprint requests.

      Requesting Access

      Customers who were not associated with an order may request access to their respective accounts' licenses by using the User Management section of the Self-service Tools tab in VEMS. Requesters will need to provide one of the following pieces of information to request account access:

      • Service contract number
      • Sales order number
      • Entitlement ID
      • Appliance serial number
      • Account number
      • IB instance number

      If the email address you use to log into VEMS matches the domain of an existing end-user contact on that account, Support access will be automatically granted. Requests for other privileges will be routed to your account's VEMS administrators.

      Requesters can see the status of their requests in the same section.

      NOTE TO EXISTING ADMINISTRATORS:  Administrators are responsible for managing VEMS access to their respective accounts and approving incoming requests. Admins can see these requests in the same section. 

      NOTE: Accounts for VEMS may be created using a shared/generic mailbox such as "technicalsupport@companydomain.com," but a real individual's name must be used and users will not be allowed to open cases using shared mailboxes. Veritas does not recommend the use of shared/generic mailboxes for VEMS access.

      NOTE: VEMS users will need permissions higher than Support on accounts in order to see those accounts within Smart Meter. 

      Inactive Administrators

      Self-service for requesting VEMS access depends on your account having active administrators. If there are no admins who have logged in within 6 months of the request, you will be directed to contact Customer Care for assistance. If your request for access times out, you will also be directed to contact Customer Care for assistance.

      If no administrators have logged in within 6 months, you will be asked to email Customer Care with a self-authorization statement.

      Example: “I represent and warrant to Veritas that I am an employee of [Company] and I am authorized to have [PRIVILEGE TYPE] level of access to [Company's] VEMS. All previously-listed administrator account(s) are inactive due to no logins into VEMS within the last 6 months.”

       

      Changing Email Addresses

      If a user's email address changes, their admin can inactivate the username associated with the old email address and add a new username for the new email address.  For more information, see the User Guide or online Help.

      Changing Usernames

      If a username has been spelled incorrectly and needs to be corrected, please contact CustomerCare@Veritas.com.

      Generating Entitlement Reports

      VEMS users with permissions above Support can export a list of entitlements.  This report will include all entitlements in VEMS on the accounts to which the user has been added.

      This report can be generated in the Entitlements tab by clicking Export. Items in the Entitlements tab can also be filtered based on product family, contract number, etc. so that exports can be created for specific product families, contracts, etc.

      Should a more comprehensive list of entitlements be needed, please contact CustomerCare@Veritas.com for information on obtaining an Install Base Report.  NOTE:  Third-party requests may require additional end-user approvals.

      Getting Help

      For assistance with VEMS please contact Veritas Customer Care at CustomerCare@Veritas.com or by calling the number listed for your region at https://www.veritas.com/content/support/en_US/contact-us. 

      Source: https://www.veritas.com/support/en_US/article.100040083

      Symantec Backup Exec". backup-exec.helpmax.net. Retrieved 2016-02-25.
    37. ^"Push-installing Backup Exec to remote computers

      When I go to import, I am only able to choose an XML file vs an .SLF file.  I have valid license files from Symantec that BE won't even see.  


      Best Answer

      Elias_VTC (Veritas)

      Chipotle

      OP

      That is not a Backup Exec 2012 version of the product.  It sounds like you received licenses and information for Backup Exec 2012, but the product you show there is Backup Exec version 12 (from 2008).  Two different products.  That would explain the situation.


      Go to www.backupexec.com and download the Backup Exec 2012 version of the product.

      View this "Best Answer" in the replies below »

      9 Replies

      · · ·

      Denis Kelley

      Mace

      OP

      What version are you using? I've always just keyed in my license numbers from either the email or from the Symantec Portal.

      0

      · · ·

      Josh5625

      Serrano

      OP

      I am using 2012.  I received .slf files FROM Symantec.  I have no current way of importing them into 2012.  They gave me a serial number for the "desktop and laptop" option, and license FILES for the rest.

      0

      · · ·

      Denis Kelley

      Mace

      OP

      Okay, I get you. I have the files, but am not using 2012 yet. Let me try it on my test server.

      0

      · · ·

      Denis Kelley

      Mace

      OP

      Oopsie, I uninstalled my trial. I'd contact tech support.

      0

      · · ·

      Elias_VTC (Veritas)

      Chipotle

      OP

      If you are trying to install Backup Exec 2012, that will look for and require an SLF file.  If you do not have a license file, type in the license number, and Backup Exec will fetch the SLF from the licensing server.

      The desktop and laptop option does not use SLF files.

      0

      · · ·

      Josh5625

      Serrano

      OP

      Elias_VTC (Veritas) wrote:

      1.  If you are trying to install Backup Exec 2012, that will look for and require an SLF file.

      2.  If you do not have a license file, type in the license number, and Backup Exec will fetch the SLF from the licensing server.

      3.The desktop and laptop option does not use SLF files.

      1.  It isn't.  That's the problem.

      2.  Please read my posts above.  I have license files, serial numbers, but NO, I repeat, NO license numbers.

      3.  I'm not worried about this.

      0

      · · ·

      Elias_VTC (Veritas)

      Chipotle

      OP

      Can you send me a screen shot of the screen where you are trying to import the license please?   Something is not matching up here. You can either DM message me here, or email it to elias@symantec.com

      0

      · · ·

      Josh5625

      Serrano

      OP

      Here is a screenshot:

      0

      · · ·

      Elias_VTC (Veritas)

      Chipotle

      OP

      Best Answer

      That is not a Backup Exec 2012 version of the product.  It sounds like you received licenses and information for Backup Exec 2012, but the product you show there is Backup Exec version 12 (from 2008).  Two different products.  That would explain the situation.


      Go to www.backupexec.com and download the Backup Exec 2012 version of the product.

      0

      This topic has been locked by an administrator and is no longer open for commenting.

      To continue this discussion, please ask a new question.

      Source: https://community.spiceworks.com/topic/243887-unable-to-import-backup-exec-license-file-slf
      ZDNet". ZDNet. Retrieved 2016-02-19.
    38. ^"Symantec Busts Out SaaS with Backup Exec 12 Symantec Corporation". www.symantec.com. Retrieved 2016-02-19.
    39. ^"Symantec Backup Exec 10d is "Designed for Disk" and Delivers Continuous Data Protection veritas backup exec support number  - Crack Key For U

