The Review Process

So some of you may be wondering how we run our reviews on the products we have here at Technical Fowl. Sure, audio devices and computer peripherals seem pretty straightforward, but computers can get a little weird, since there are so many different kinds.

Every review we do is marked with the “Gear” and “Opinion” tags, but we also try to tag each review with its intended audience. For example, a 2-in-1 Windows Pro PC designed for mobile work is going to get the “business” tag. If the system is packing a high-end GeForce and an i9, it gets the “gaming” tag, because that hardware is designed for FPS, not WFH. We’ll make sure that’s reflected in the body of the review as well.

In short, even if something we review doesn’t personally work for any of us high-end power users at TF, that doesn’t mean it won’t work for its intended audience.

And just for fun, we’ll run some of our gaming benchmarks on non-gaming machines too. Not because we expect blockbuster numbers, but because they’re still a pretty good performance indicator. It was our Shadow of the Tomb Raider benchmark that showed us the real difference between Intel UHD and the newer Iris Xe in what integrated graphics could do. We’ve had hardcore workstations with dual Quadro cards that aren’t meant for gaming but can still be benched with gaming benchmarks. So to show you our process for computer testing, here are some of the things we do:

Specs and Design

This one is pretty straightforward. What is the build like? How are the specs? Do the design elements work or not? How does it feel? Is anything missing?

Performance

We have a nice list of benchmarks that we will pick and choose from to run, but there are definitely some mainstays we like the most. Here’s our list:

PCMark 10: PCMark is an industry-standard test that benchmarks a machine for office tasks and performance for everyday activities. It gives an overall score broken down into the following elements:

  • Essentials: This covers everyday “normal” user activity like web browsing, video calls, and app start-up times
  • Productivity: This rates performance with office applications like Microsoft Office or G Suite
  • Digital Content Creation: This test rates how well a system can handle photo/video editing and rendering

3DMark Advanced: Something we generally reserve for gaming machines, since its primary function is rating graphical performance. Here are some of the tests we run:

  • Time Spy: This one tests the processor and video card to rate DirectX 12 performance. For certain systems we also run the 4K edition, Time Spy Extreme
  • Port Royal: Port Royal is a dedicated benchmark that measures real-time ray tracing on any video card that supports DirectX Raytracing (e.g. NVIDIA RTX cards)
  • Night Raid: This is a newer one that we started running in May 2021, meant for mobile computing devices with integrated graphics (e.g. Intel UHD, Iris Xe)

Shadow of the Tomb Raider in-game benchmark: Back in the day we used its predecessor, Rise of the Tomb Raider, but SotTR is still an intensely demanding game graphically, and it supports ray tracing as well. Its benchmark is configurable to test most modern systems, gaming or not, at different resolutions and graphical settings.

This is one of our mainstays that we run on nearly all machines, or at least try to.

Final Fantasy XV for Windows benchmark: Square Enix put out their own benchmark for this game so you could make sure you could even play it before shelling out the money. FFXV can be pretty graphically intensive, especially with the settings turned up, and allows benching at FHD and 4K resolutions with different tiers of settings.

We tend to run this one mainly on gaming machines because it can be tuned so high, but we’ll occasionally fold it in on the “low graphics” settings for non-gaming machines.

In addition to this list, especially for gaming machines, we just, well… play games of different tiers on it to see how we feel it performs. That usually means whatever’s in our Steam library, on Battle.net from Blizzard, or on Xbox Game Pass.

Real-world battery testing (laptops and mobile devices)

There are a lot of sites out there that will test battery rundown with simple tests or just loop video until the battery clicks to zero. We don’t do that here, because you don’t do that out there. We see how long the battery lasts in as close to normal operation as we can simulate. We’ll use it as our work machine and run boring things like Office 365 and other web/business/enterprise applications, or do some on-site technical support and remote sessions, because that’s what people do. We’ll also set aside a few hours for streaming video, watching dumb stuff on YouTube, or downloading games and add-ons to play later.

And Finally, the Ratings

We take all of these into account – specs, design, feel, performance, and battery – and try to match that up with the intended audience. In that sense we’re trying to figure out whether a system is doing its job. We won’t give a business ultrabook with integrated graphics a poor rating because it doesn’t run games. Of course it doesn’t; there’s no discrete GPU. But is it good for its intended audience of business users? That’s what we’re looking for.

For non-gaming systems this boils down to a single rating from 0-10. For gaming systems we break that into two scores – an “everyday” score to show how it operates in normal circumstances and a “gaming” score based purely on pixel-pushing performance.

We hope this gives you a better idea of what we do here to get you the information you need. Thanks for reading!

/TF mgmt