Web Vitals are increasingly important to how your website ranks in search results, and you can learn a lot about how users perceive your site by studying them. Gone are the days when simply having a fast site was the measure of a well-built site. Erik Runyon joins us from Notre Dame this week to take a dive into what Web Vitals are, how you can improve them, and what you should be on the lookout for.
- Cumulative Layout Shift (CLS)
- Doherty Threshold
- Optimize First Input Delay
- WebPageTest by Catchpoint
- Web Vitals
- When Users Click: Tracking First Input Delay
The following is a machine-generated transcript of this episode. It will contain errors until it has been reviewed and edited, and we apologize for the difficulty that may cause for screen readers. Do you want to help us speed up our transcribing process? Consider sponsoring an episode.
Hey everybody, thanks for joining us. You’re listening to episode number 97 of the Drunken UX Podcast. We’re gonna be demystifying web vitals and page speed and performance issues. We’ve got special guest Erik Runyon joining us all the way from Notre Dame this evening. Ladies and gentlemen, I’m your host, Michael Fienen.
And I’m your other host, Aaron, and I just learned tonight that Notre Dame is in South Bend, Indiana, uh, not in Chicago like I thought it was.
I learned that it was there and not in Nevada, so I don’t know what that says about me. I lived in
Indiana for a long time and I don’t know why I always thought that Notre Dame was in Chicago. I didn’t realize it was Indiana.
If you want to make fun of us, which I highly encourage you to do, um, you should come and poke at us on Twitter or Facebook at slash drunkenUX, or Instagram at slash drunkenUXPodcast. You can also join us on our Discord at drunkenux.com/discord. Uh, make fun of us. Put us down because we don’t know where colleges are, and, uh, you know, there are just nuggets of trivia like that that, if you never have to handle them in your life,
you just don’t know, and I guess I just never needed to know where Notre Dame was. Yeah, before we get too far into things: if you are enjoying the Drunken UX Podcast, be sure to run by our sponsors over at HighEdWeb. Now, you have heard us talk about HighEdWeb many times; they put together a fantastic annual conference every year for higher ed web development professionals, marketers, accessibility experts, and a whole lot more.
Their conference this year is all online. It is October 4th and 5th, and their keynote presenter is Shannon Cason from NPR, so be sure to check them out. You can go get registered by visiting drunkenux.com/heweb21. That’s drunkenux.com/heweb21. Um, Aaron, I need you to save me from my trivia black hole and share with me what it is you are imbibing this evening for this conversation on performance. Uh,
Nothing interesting. I just have a Coke, and by that I mean Coca-Cola.
Yeah, I need to make a run to the store soon but I’m traveling this weekend and haven’t had a chance to go to it yet.
So I have a cola as my standby, drinking a Pepsi, man, for the most part. So I’ve got that kind of to the side. I am still trying to clear space on my bar, so I’m not going to finish it tonight, but I’m going to put a pretty serious dent in it. I’ve got the end of my bottle of Mortlach 16, so that’s, it’s a Speyside scotch. I love this stuff. The bottle is beautiful. Um, but it’s also known as the Beast of Dufftown.
Yeah, and so it’s just, I feel kind of like it’s my, my, uh, my Simpsons scotch, you know, like, uh, Duff? But it’s, man, it has about the best nose of any scotch. Uh, very floral, very fruity. It’s like, when you pour it in the glass, you know you have poured a glass of scotch. Uh, flavor-wise, it’s not anything crazy out of this world; like, it’s not bad, it’s not great, it’s good, it’s enjoyable, very sweet.
It’s, you know, it is a Speyside, so it is sweet. Um, but this is the Distiller’s Dram edition, the 16 year, and I’m hoping I can find another bottle of it, because it is good enough to keep, like, on the shelf. So we’ll see, we’ll see where that goes. Um, Erik, I’m gonna turn to you now. You lifted the glass and I, at first... are you drinking tea? I see something in the glass, that’s all I can tell.
So this is, uh, an amber rum from a local distiller, actually, here in town. So this is sort of an old fashioned-inspired mix that I made tonight. So it’s the amber rum; uh, instead of using a sugar cube I’m using Michigan maple syrup, grabbed some mint out of my garden, and then a couple splashes of bitters and some soda, and it actually turned out pretty well.
I’m not gonna lie, I don’t know that it sounds super appealing, but at the same time I’m thinking rum and like maple syrup,
it actually turned out better than I thought
I could give that a shot. Like, I’ve got a bottle, I haven’t... in fact, maybe I’ll have that for the next episode, if I remember. I’ve got a bottle of Ron Zacapa XO, um, so that’s the
mint that you’re seeing in there.
Oh, the mint is what I was seeing. Yeah, I thought maybe it was whiskey stones, is all I could see. It was like something, um... So, Erik, the voice, the third voice that you’re hearing, is Erik Runyon. He is the Technical Director for Marketing Communications at Notre Dame. Um, he is also, as we have referred to in the past, he has slide number four on shouldiuseacarousel.com.
Thanks to some of his research, they broke down just how little people use the slides on carousels, and so we have referred to that, and many people have referred to that research over the years. Uh, welcome to the show, Erik. I want to talk for just a quick second though because, you know, technical director... but if I can’t... I can’t talk, the Mortlach is taking over my tongue. Um, Technical Director for Marketing Communications at Notre Dame.
Sounds impressive. I have no doubt it absolutely is impressive, but I also doubt that you started your life that way. Um, I just wanted to ask, like, what was it that got you into web development, web communications, and what did you start out as that got you to where you are now?
Yeah, so like you said the title is pretty long. Um but in all honesty it just means a developer who’s been there way too long.
So, getting started... anyway, I started learning web development in ’95 as an undergrad in college. So my first text editor was Pico in a Unix shell. Um, and the first websites I was building were, uh, for my sisters, who were overseas doing mission work. So they would send me their newsletter, I would take the newsletter and turn it into web pages so that we could get it out to more people.
I don’t know if there was even such a thing as professional web developers at that time. I was going to school for, uh, it was the music technology degree program. So once I got out of college, my first job was doing audio design for computer games for a company up in Ann Arbor.
Like, like, chiptune-type stuff, you
know? It was, like, full-on sound design, audio. We did a lot of Tonka games, Rocky and Bullwinkle and stuff like that. The Rocky and Bullwinkle one was actually pretty good. Thank you. So after that... I was only in that job for like 10 months before going on to a job that was more graphic design, a little bit of programming.
But then I also managed the company’s website, and that was definitely the days of font tags, you know, putting the color as a color attribute inline; any time you wanted to update a font you were finding and replacing in the document. So after that I was doing, believe it or not, I was doing wall border design, like color matching and designing wall borders for RVs. Um, like
like the stuff you put on the, the roof
like this, along the wall; it’s sort of like in between the lower wall and the upper wall. Yeah, so that was the business’s primary, um, primary business. But they also needed someone to build their first online catalog, which they did not have. So as part of the process of hiring me in there, since I had web experience, they were like, would you be able to do this? And I’m like, I’ve never done it before, but I could certainly learn it.
So I spent that job, while I was also doing the design work, learning PHP and MySQL in order to build them an online catalog, and I quickly realized that the part of the job that I really loved was when I was working on the web stuff, not when I was doing the design work. So after a couple of years there I started looking for my first full-time web development job, and I really liked the work that the Notre Dame web team was doing.
So I tracked them down and started applying for their open positions. So when one of their developers left, I applied and got in. So that was in July of 2007, and I’ve been through, we’ve been through, um, probably four different roles technically with the team since then, eight different offices. Um, but still the same team the entire 14 years.
Very cool. Okay, we’re going to see how this plays out. Uh, we wanted to talk about page performance. Now, this is a subject we have covered in the past, but we wanted to take a slightly more specific dive into a couple areas, um, specifically as it applies to Google, and those are: what is page speed, and why should I care about it as a developer? And then where we’re really gonna dig in are these new things,
this new deal that Google has decided is going to be important for the future, which are web vitals. Think of them kind of like any other vital in medicine or something like that: there’s sort of a baseline set of metrics that help elaborate on how good is your site, how fast is your site, how usable is your site. The interesting thing about web vitals is this notion that they are kind of using them as a proxy measurement for usability.
Uh They’ve decided that these three different areas represent key things to making sure your site is usable. So we’re going to dig into those what they mean, how they affect your site and how you can measure and improve upon them. Um
Yeah, and it’s definitely relevant right now, because Google is in the process of making Core Web Vitals part of the search algorithm. So if you have a fast site that ranks well on mobile, then you’re more likely to have your content boosted in Google properties like Google News and whatnot, especially if your news articles meet Google’s formatting requirements. So they were planning on rolling it out earlier this year,
um, but there was a slight delay. So it’s actually, I believe, in process right now, and it’s supposed to be in place by the end of August, I want to say, based on the most recent article I’ve seen.
By the time you’re listening to this episode, it may very well already be out and rolling at that point. So the one phrase that a lot of people will probably be familiar with is this phrase “page speed.”
Not in the abstract sense, but in the sense of the actual thing that Google creates called PageSpeed, um, and you can get the PageSpeed Insights report in... I think it’s in the Google Search Console. Um, PageSpeed, for what it’s worth, is sort of an always-moving target, right?
Uh, um, if you look into how things are measured and how you’re scoring on different things, you can usually go into your web developer tools in Chrome, run Lighthouse, and we’ll talk about Lighthouse here in a bit, um, and get a number of scores back, basically. Um, now, what PageSpeed does has changed over time, and, correct me if I’m wrong on this, Erik, but isn’t PageSpeed now basically just a combined abstraction of all of the web vital scores, isn’t it?
Just those combined and sort of remixed, or am I thinking about that wrong?
I’m not sure if they specifically refer to it as page speed. There’s PageSpeed Insights, which is one of the tools that they provide that’s based on, um, based on web vitals, and Core Web Vitals specifically. Um, you can find them, like you mentioned, in the Chrome, uh, Chrome DevTools in the Lighthouse tab, and we probably should get into how the scores on each of these tools are generated.
Um, sure, because you have things like Lighthouse, which are strictly lab data. So that means it runs on the machine at that time using particular specs, um, and it’s not representative of real user experience. So if you go to PageSpeed Insights, the nice thing about that one is it will give you their lab data, but if the source domain that you are testing has enough traffic, they will also give you a breakdown of some field data.
And that is data that comes from the Chrome User Experience Report. It’s CrUX, that’s the CrUX data, C-r-U-X. Um, and so that’s collected only from Chrome browsers, by people just naturally navigating sites, and it’s not just any Chrome users. So, like, there is a setting in Chrome that allows you to, you know, share your browsing data with Google, and if you have it turned off it’s not going to use your data.
But if you do, then it goes into that CrUX report, and that’s the actual field data; that’s real users using a site. So one thing you may notice if you use something like PageSpeed Insights versus Lighthouse: you’ll see field data versus lab data, and sometimes the values in those can be very different, partly because they’re measuring some slightly different, um, slightly different things.
So with lab data you can’t measure when a user interacts with the site. So, um, one of the Core Web Vitals is first input delay. Uh, I suppose we should list out what those are first. So you’ve got Largest Contentful Paint, you’ve got First Input Delay, and then you’ve got Cumulative Layout Shift. So
you’ll see all of these as acronyms very frequently, and there are more than these as well. But anytime you see something referring to LCP, FID, or CLS, that’s what Erik just described. And, like I say, there are several more than that as well.
Um, but we should probably take a step back and look at some of these Core Web Vitals on their own first.
So, well, you mentioned... uh, let’s start with LCP, right? Um, not SCPs, those are weird and creepy. Um, LCP, as you mentioned, is Largest Contentful Paint, right? This is, and I’m going to quote Google on this as far as what that means, they say LCP on the metrics report is “the render time of the largest image or text block visible within the viewport, relative to when the page first started loading.”
So it’s basically: when is the biggest thing that you can see finally available, from millisecond zero, whatever.
So, if you, um... it’s easier to see these sorts of things if you’re looking at a, at a filmstrip of a page loading. If you use something like, uh, webpagetest.org, you can look at the filmstrip.
So Lighthouse in Chrome actually will give you one as well.
Yeah, I wish you could blow it up. It’s always so tiny. It is hard to see what, it’s hard to see what’s actually going on in those
GTmetrix, I think, does it as well. And theirs, I think, is a little more, like... they overlay the bars of each element as it renders out, and I think that’s kind of nice.
So, as we start to dive into this, one thing I want to make clear is: I am not a performance professional. There are people whose job is performance all day long, and they know the stuff inside and out. I’m just a performance geek. It’s part of my job. It’s something that, given my druthers, I will wander off and play with instead of something else that I probably should be doing.
Um, so I definitely dig in and try to learn as much as I can, but if, if I get anything wrong in here, feel free to jump into discord and you can yell at me too along with the other two.
Yeah, yeah. Always correct us if we’re wrong. Um, we’re a couple guys. So that’s all right.
So, for Largest Contentful Paint, it would be, um, like, if you think of your standard homepage, it would be like the hero image, especially if you’re on mobile and it’s taking up, you know, two thirds of the screen or three quarters of the screen. Um, previously we had metrics such as, like, DOMContentLoaded.
That was something that we often used as a metric to know how quickly a page was rendering. The advantage LCP has is that it actually focuses on what the user actually sees on their screen. Um, so it’s more of a user-centered metric. And the way LCP works, and it’s the same with all these Core Web Vitals, is they each have three possible scores that they can get, um, and they’re good, needs improvement, or poor.
And so for Largest Contentful Paint: if the Largest Contentful Paint happens in 2.5 seconds or less, then it’s considered good; between 2.5 and four seconds, yours needs improvement; and anything above that is poor. Um, and so as far as Largest Contentful Paint goes, it’s important to optimize for getting, um, the most important thing on your screen up as quickly as possible for your users.
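Those bands are easy to keep straight as a tiny helper. A sketch, with a function name of our own, using the LCP thresholds Erik just gave (the same good/needs improvement/poor banding applies to the other vitals, each with its own numbers):

```javascript
// Rate an LCP measurement against Google's published thresholds.
// The helper name is ours; the 2.5s / 4s boundaries are Google's.
function rateLCP(seconds) {
  if (seconds <= 2.5) return "good";
  if (seconds <= 4) return "needs improvement";
  return "poor";
}

console.log(rateLCP(1.8)); // "good"
console.log(rateLCP(3.2)); // "needs improvement"
console.log(rateLCP(5.0)); // "poor"
```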
And you mentioned, like, the events that we have access to, right? So DOMContentLoaded... our DOMContentLoaded is a very programmatic, opaque, black-box, behind-the-veil kind of thing. Developers get it; we understand when that fires and why that fires, but a user has no clue. And the perception of the page, I think, is going to be one of the sort of running themes of web vitals and how they’re sort of aligned with usability.
They’re tied somewhat to this notion of how quickly does the user perceive your website. You can make something feel and look very fast even if it’s still very busy in the background doing things, and this is one of those examples.
You can get something painted to the page very fast even if you’re still binding events to different things in the DOM, you’re still rendering out dynamic content, or you’re making API calls. All of that stuff can still be happening even though, to the user, they think it’s done.
Right, exactly. And I guess the overall theme for Core Web Vitals is user experience. Like, how can we make this better for the end user?
It is, I think, worth pointing out too, LCP is very narrowly scoped. It’s not just anything on your page. It specifically applies to image tags, um, image elements, video elements, um, any element that has a background image that is loaded via a url() function. Um, this is opposed to, like, a CSS gradient or, like, a data-encoded image in your CSS. Um, and it can also apply to any block-level elements that contain any text.
So we’re talking paragraph tags, you know, divs, things like that. So there is a very sort of limited scope as to what will affect, what will trigger, an LCP score. Um, so thinking about that, we’ve got a limited set of things, and we know we need 2.5 seconds. That’s not fooling around. I mean, that’s fast.
Um, you know, what do we figure the average page load time is these days? It’s like eight seconds, nine seconds, um, something in that area. 2.5 seconds is super fast. Yeah. When does the, does the clock start?
Does it begin when, like, when the request comes back, or when the request is issued? Like, when do you start the clock?
The clock starts, I think, from the minute the browser receives its response and starts working.
So it’s when the response hits the browser, not when the browser issues the request initially.
Hmm, oh, that’s what you’re saying. Uh, honestly, I don’t know the answer to that and I’m not gonna fake it. Um, I don’t know. I would presume it probably does, because most things, you know, network-wise usually happen at request time, right? So, like, because if you’re sitting there waiting, that wait time factors into any, you know, performance metric, normally. So...
I love that question. I’m making a note of that
I’m asking because, if we’re measuring the contentful paint of the page itself, then it should just be like, okay, browser, you have the information you need, go, and then how does it do at that point? If you factor in the request, then we’re also factoring in, like, server and host speed.
Well, but I think that absolutely matters, because it’s all infrastructure, right? And the user... here’s the thing, and to go back to this notion that this is all about UX and usability: the user has no clue which part of that is slow, and they don’t care.
That would certainly be my assumption on that. And if anybody knows offhand, feel free to shoot us a tweet and verify that. But I would bet that that’s got to be... from the minute the network traffic starts, the clock starts, would be my guess. If I wanted to be less lazy, I could probably just open up my network tab right now, load a page, and just literally watch it load and know the answer.
But I feel like that’s a safe assumption at that point.
Yeah. And as far as 2.5 seconds being fast: I keep a list of 1,769 higher ed home pages that I like to test against, just for the fun of
course. But we all... I mean, I keep that in my wallet.
And so I actually ran those through... uh, there’s a tool called Speedlify by Zach Leatherman,
um, and so I ran all of those sites, I ran them all through Speedlify. And out of all of those sites, 0.51% scored good on performance.
Mm. What can we do then? So if we want to get our LCP score down... let’s say I’ve got a four-second LCP and I’m like, man, I’m right on that edge of being poor, I’d like to get that closer to that nice green number. What can we do to improve our Largest Contentful Paint?
So, to improve LCP, it’s pretty much all of the same things you would do just to improve overall page performance, just with a slight focus “above the fold,” quote unquote. We all know that that’s sort of not a real thing, but it’s still something that you can sort of think about when you’re creating a website, especially when you want to focus on LCP.
One of the things, though, about that: even though the fold is a myth, what is or is not visible to the user isn’t. And now that we have the Intersection Observer API, we can do clever little tricks like say, you know what, let’s not try to request all the images that aren’t visible to the user yet. We could defer those or, you know, lazy load...
loading. So that helps
those. Yes. Then those aren’t competing for HTTP requests and bandwidth for that user, especially if they’re on, like, a bad mobile, uh, network or something like that.
Yeah, exactly. So I’d say the first thing you want to do is make sure that you start with the server request itself. Um, you know, the longer your server takes to get around to sending something back to the client, that’s going to affect your score. That’s time that the user is sitting there and nothing’s getting painted to the screen. Once you do send data back, try to keep your initial payloads small.
So, like, your HTML: if you can keep that below 14 kilobytes, um, it can come down in a single trip and not have to do multiple round trips to get your HTML.
I think I know what you’re referring to. But could you elaborate?
Yeah. So, um, when data is sent across the network it’s chunked, and you can see this if you look in a WebPageTest: you’ll see colored bars on the individual requests, where you’ll see darker...
network tab. Right? The network tab of web developer tools. Yeah.
So if you look at, like, the HTML request, um, you’ll see lighter and darker bars, and what those are is, like, each of those chunks: it’s downloading and then waiting and downloading, so the file is coming down in pieces. And so for your HTML, your initial response of just the HTML should be under 14 kilobytes. That allows it to go in a single burst, um, and it’s not doing multiple trips to download that HTML file,
which is huge, mind. Like, that’s 14 kilobytes gzipped as well. So usually you compress web pages; that’s called gzipping. Um, so, uh, the HTML itself can be, you know, easily four or five, ten times that size before gzipping.
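For anyone curious where that 14-kilobyte figure comes from: it falls out of TCP slow start. A typical server starts with an initial congestion window of about 10 segments of roughly 1,460 bytes each, around 14.6 KB, and the window roughly doubles on each subsequent round trip. A rough sketch under those assumptions (the helper name and the default numbers are ours, typical values rather than guarantees):

```javascript
// Estimate how many round trips a payload of `bytes` needs under a
// simplified TCP slow-start model: initial window of ~10 segments of
// ~1460 bytes, doubling each round trip. Illustrative only.
function roundTripsFor(bytes, initialSegments = 10, segmentSize = 1460) {
  let sent = 0;
  let windowSegments = initialSegments;
  let trips = 0;
  while (sent < bytes) {
    sent += windowSegments * segmentSize; // send the current window
    windowSegments *= 2;                  // slow start: window doubles
    trips += 1;
  }
  return trips;
}

console.log(roundTripsFor(14 * 1024)); // 1 -- fits the initial window
console.log(roundTripsFor(60 * 1024)); // 3 -- needs extra round trips
```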
Thank you for covering that, because I think this is really unintuitive. Like, it’s not an obvious thing, right? There’s nothing in the browser or in the code that implies some kind of, like, 14-kilobyte boundary. Um, but it’s an excellent idea for, like, optimizing your page performance, by the way.
I looked at the thing earlier about when page loading starts, and I don’t have, like... this isn’t my final answer, but what it looks like, um, is they’re using the time origin value of the page. And according to the W3C, on time origin: if the global object is a Window, time origin must be equal to... and there’s a series of options, but the first one is the time the browsing context was first created, and the browsing context is, like, the document object creator.
And so I guess, if I had to guess just from what I’ve read literally in the last five minutes, um, I think that maybe it’s when the document object node is first available. So I would guess, like, when the page starts loading, like when it first gets that initial burst of HTML back. I could be wrong, and if anyone out there does actually know the specifics on this, I would really appreciate knowing.
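For what it’s worth, that zero point is exposed as `performance.timeOrigin`, and `performance.now()` counts milliseconds elapsed since it; a quick sketch you can paste into a browser console (it also runs in Node 16+, where `performance` is a global):

```javascript
// performance.timeOrigin is the absolute "zero" (epoch milliseconds) that
// timing metrics like LCP are reported relative to; performance.now() is
// milliseconds elapsed since that origin.
const origin = performance.timeOrigin; // when the timing clock started
const elapsed = performance.now();     // ms since then

console.log(`clock started at ${new Date(origin).toISOString()}`);
console.log(`${elapsed.toFixed(1)} ms have passed since the origin`);
```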
Or just fake it: force your network to be slow and just see what the score is that comes back. So, a few things, some other stuff you can do, I think, right... and Erik, you’re absolutely right when you say, like, yeah, it’s just the stuff, right?
The stuff you do to optimize the site is what will help you here. Make sure, like, if your masthead image is a photograph, don’t send over, like, an uncompressed TIFF or something as a background image. You know, make it a JPEG, cut it down to, like, 70% quality, like, make it reasonable.
Here’s a tip: do that on your website regardless.
Yeah. So any time we do a hero image, you know, the large banner images or whatever, we do a minimum of three different sizes, um, and send it down as a srcset so the browser can choose a smaller size. Also, do not put loading="lazy" on your hero images. Anything that may appear in that theoretical above-the-fold, you don’t want to have lazy loading on that.
It needs to be prioritized,
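What Erik describes might look something like this in markup. The file names and widths here are made-up placeholders, not the actual Notre Dame setup:

```html
<!-- Hero image: three candidate sizes via srcset; never lazy-load it. -->
<img src="hero-1200.jpg"
     srcset="hero-600.jpg 600w, hero-1200.jpg 1200w, hero-2000.jpg 2000w"
     sizes="100vw"
     alt="Campus at sunrise"
     loading="eager">

<!-- Below-the-fold imagery, by contrast, is safe to lazy-load. -->
<img src="gallery-1.jpg" alt="Gallery photo" loading="lazy">
```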
And those are... yeah, the HTML is nothing by comparison in those cases, quite frequently.
Um, you know, just really focus on the things that get that first paint going, especially the larger parts.
And the other way to get that happening as quickly as possible is to make sure you’re leveraging caching, and make sure you’re leveraging a content delivery network, a CDN.
Um, if you’ve got a bigger site, or a site that, you know, doesn’t have a lot of resources, if you can offload HTML and imagery to really fast networks that have it pre-cached at nodes close to the user, these are all things that make sure that person gets that information as fast as possible, so that it can get painted as quickly as possible.
Yeah, I’d say the only thing to be careful about there is how many domains you split your content across. Um, you know, back in the HTTP/1 days we would, uh, see a lot of sites splitting up resources over multiple domains in order to get more resources downloading at one time, and with HTTP/2 these days that’s actually an anti-pattern. The more things you can keep on your primary domain, the faster.
So especially if you can serve your HTML over a CDN, that’s even better. Um, but if you do have to have, like, multiple domains to serve something, you can do a preconnect in the head of your document to get that initial connection handshake going sooner, before the resource is actually found by parsing.
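A preconnect hint like Erik describes is a one-liner in the document head; the CDN domain below is a placeholder:

```html
<!-- Warm up the DNS lookup, TCP handshake, and TLS negotiation for a
     third-party origin before the parser discovers resources on it. -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
```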
Okay, web vital number two. This one is actually one of the weirder ones; it’s probably the one I like the least, because it’s a little confusing, shall we say. That’s First Input Delay, FID. Um, again from Google, they say FID measures the time from when a user first interacts with the page, i.e.
When you look at things like the Insights report, you have access to two sets of data in some cases: some of it is the lab information, and then the other is the user data, and FID is very much meant to be the user side of this. Um, this one’s weird,
right? So, as I mentioned earlier, between the synthetic, like, the lab data, versus the RUM data: first input delay does not exist in the lab. Okay? It doesn’t exist. No, it does not. You only get this in, uh, specific tools. Um, so, like, the CrUX report; you’ll get it back in PageSpeed Insights; uh, Google... what’s it called? Just Google Webmaster... what is it called now?
Oh, is it just the web console? Just Google... Search Console.
Search Console. “Console” got stuck in my head.
Yeah, so Search Console reports on these as well. But yeah, so when you’re dealing with lab data, where it’s all synthesized, instead of first input delay you would be looking at something more like Time to Interactive or Total Blocking Time, because they’re the closest corollary metrics they have that are, you know, similar to first input delay. Because you can’t really fake first input delay; it actually requires a user to interact with the page, whether it’s clicking or tapping.
Um, scrolling doesn’t count. Um, so yes, when you’re dealing with just Lighthouse in the browser, um, you’re not gonna get first input delay.
But you will get TTI or TBT, correct? Just let us know if we’re throwing too many acronyms at you guys, listeners. Uh, I told you there were a lot of them out there for this episode.
Tell us now, before we’ve finished recording.
Yell loudly at your listening devices. If we’ve
finished recording, you’re out of luck.
If we haven’t, and you haven’t done it, then you haven’t said it loud enough, so say it really loud.
One of the things I find really interesting about first input delay: Google says, like, to have a good score, the first input delay has to be 100 milliseconds or less, 1/10 of a second. Um, the reason this is kind of interesting, at least to me, is that’s only 25% of what they call the Doherty threshold. Um, which, this is, like, an old-school computer science type concept that goes back decades and decades,
which is this notion that for something to appear immediate, simultaneous, to the user, the reaction of the system has to be 400 milliseconds or less. And so it was interesting to me that Google’s like, oh yeah, but to be rated good you have to be under 100 milliseconds, because I feel like the user can’t tell; at least in theory, the user shouldn’t be able to tell. So I don’t know what to make of that entirely.
I don’t know, Erik, if you’ve got any thoughts on that, but that just struck me as funny anyway.
And so what FastClick did was move that delay: it would immediately execute whatever it was that the person was tapping on, to remove that 300-millisecond delay, because it was noticeable to users. You know, that very minor delay that Safari had in place was noticeable, so this was, uh, a methodology to get around it. And Safari has since changed the way that works, so it’s not necessary anymore.
Um, but that was something where, you know, it’s a very small amount of time, but it is something that users noticed enough that there was a developer workaround.
I’m writing it all in vanilla JS, which means I’m writing a lot of, like, addEventListener-type stuff, and as I organize my code I try to put that stuff kind of together, and I see how many event listeners I’m adding. And that’s the stuff we’re talking about here: think about how much you are using JavaScript for your menus, for dropdown behavior.
I know it just always like I always pictured like drinking water and stuff just Yeah,
Um, so, like, scripting-related CPU time in milliseconds, sort of thing.
Um, I’ll throw that article, Tim’s article, in the show notes as well, if anybody wants to go read that; definitely check that out. Yeah. So let’s talk about FID in terms of: how do I reduce it? How do I reduce the time, reduce, rather, the delay to first interaction? Um, obviously, I think everything we’ve already said about Largest Contentful Paint applies here.
You know, again, good performance techniques are going to help. What’s the phrase? When the tide rises, it lifts all the boats. There’s some clever phrase. A rising tide lifts all boats, that one.
Yes, that phrase. By improving base performance techniques, you’re going to inherently affect all of this. So what would you do, Erik, if somebody came to you and said, hey, we ran a test; you’ve got a search box on the main page of the Notre Dame website?
And then everything else can be deferred down at the bottom of the page, because you don’t need all of your lead-form code, validation, and error handling right away; you’re not going to be able to fill out a form quickly enough to need it before it loads. So that is something I can offload and say, don’t load and run that yet. It’s not even necessary on every page.
So let’s let that wait and linger. A couple of other things you can do: obviously, make sure things are asynchronous where you can. If you’ve got fetch tasks, API calls, things like that, make them asynchronous, and you’ll shorten the time it takes for things to load, because the page doesn’t have to wait for them linearly.
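Related to keeping input snappy: long JavaScript tasks are what actually block an input handler from running. One common pattern, sketched here generically rather than taken from the episode, is to break big loops into chunks and yield back to the event loop between them so queued clicks and taps can run.

```javascript
// Process a large list in small chunks, yielding between chunks so
// queued input events (clicks, taps) can run instead of waiting for
// one long task to finish.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    // Hand control back to the event loop before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

The chunk size of 50 is arbitrary; the point is that no single turn of the event loop runs long enough to push your first input delay past that 100 millisecond target.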
Oh, this is the one that I probably care about the most, because I find it the most annoying.
Yes, this is definitely one that everybody has run into, even if they don’t know they’ve run into it. For the last time quoting Google: CLS is a measurement of the largest burst of layout shift scores for every unexpected layout shift that occurs during the entire lifespan of a page.
I wish, like, I know this is supposed to be objective and everything, but certain layout shifts can be small but subjectively more significant. Like the layout shift when the Facebook header is loading and all of a sudden an icon bumps the notifications thing to the right slightly, and then you tap that when you meant to tap something else. I swear they do that on purpose.
I read that definition and it’s terribly hard to understand. CLS is basically an abstract measurement between zero and one, and if you want to know the math, I am not going to go into it, because it is complicated and weird. I’ll have a link; they have a page that explains how you calculate it based on fractions and the amount of pixels things move relative to other things. There’s a whole equation, and it’s not fun. You don’t need to know it.
It’s not good podcast content.
Yeah, it’s not good podcast content. That is an excellent point. It essentially boils down to: things moving around and pushing other things to different places equals bad.
Yeah. Users fixate, right? When you see a page, your eyes lock onto things, and we’re used to this notion of an image placeholder that’s 640 by 480, right? If you put a height and width on that image, whether on the tag or at least in CSS, the browser knows to hold that space, that something that size is going to come in there.
And so when it loads, it’s not disruptive to the user’s visual focus on that page. With text, for instance, you could already be reading the page before a big image loads, and if you’re trying to read that page and the image loads and shoves all that text around, suddenly you’re jarred out of that flow. That’s disruptive, and that’s a nice way of putting it.
But thinking about improving that value, I mean, this feels easy, doesn’t it?
So if you do put a width and height on it, the browser knows that something is coming in on that spot, so as you’re scrolling it’s not shifting things around. That’s an easy one to do: put the width and the height on things.
You could even use min-width and min-height to ensure it’s always a minimum of a certain size, if you don’t know exactly the size it’s going to be but you know the percentage is always going to work out to at least 200 pixels.
That at least reduces the amount of shift. If you can’t avoid the shift, you can at least minimize the shift, which is perfectly valid. I’m sure there are situations where maybe your CMS doesn’t give you the information about the image to work with, and it’s a user-uploaded image.
Yeah. And the thing to keep in mind about CLS is that it’s cumulative through the lifetime of the page. So it’s not like First Contentful Paint, where it’s boom, this is now what your measurement is. It keeps accumulating as the user scrolls down the page.
They even specifically say it’s reasonable that sometimes you have to insert something into the page, but you should always try to insert it after things, not before things. That helps reduce the score as well. And something else I thought was interesting: use animations where you can, because animations help reduce the amount of shift from frame to frame, so you don’t go from nothing to a huge something.
And that helps reduce the overall cumulative score, because the way they calculate it is based on ratios: if the frame-to-frame movement is smaller, even if you move the same distance overall, the cumulative score is lower, because the change between ratios from frame to frame is smaller. Again, the math is super weird. If you know you’ve got to have a banner ad, if you work on a content site and you have ads, then you very likely know how big those ads are.
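For the curious, the per-shift math Google documents boils down to: layout shift score = impact fraction × distance fraction, where the impact fraction is the share of the viewport the moving element affects (before and after the shift) and the distance fraction is how far it moved relative to the viewport’s largest dimension. Here is a hedged sketch of just that arithmetic; the real browser computation also groups shifts into session windows and ignores shifts that follow recent user input, which this simplification leaves out.

```javascript
// One layout shift's score, per the documented formula:
// impact fraction * distance fraction, each a value in [0, 1].
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Accumulate a series of shifts (simplified: real CLS reports the
// worst "session window" of shifts, not a raw lifetime sum).
function cumulativeScore(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impact, s.distance),
    0
  );
}
```

You can see why small animated steps behave differently from one big jump: each frame’s impact and distance fractions shrink, so each contribution to the score shrinks with them.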
So Lighthouse is the tool that exists in Chrome DevTools. It will give you not just a performance score, but other things like accessibility, best practices, and SEO, and you can also do a PWA test, for progressive web apps; it will test how well your site meets Google’s criteria for a progressive web app.
The interesting thing about Lighthouse is that it’s also available as a command-line tool, so if you wanted to script it, if you wanted to write your own tool to test a certain thing on a schedule, you can download the CLI and build it into a Node script or something like that.
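As a sketch of what that scripted run might look like (the flags below are real Lighthouse CLI options, but the URL and output path are placeholders, and this isn’t a command from the episode):

```shell
# Install the CLI, then run a performance-only audit headlessly and
# save JSON you could diff or graph over time.
npm install -g lighthouse
lighthouse https://example.com \
  --only-categories=performance \
  --output=json \
  --output-path=./lighthouse-report.json \
  --chrome-flags="--headless"
```

Dropping that into a cron job or CI step is exactly the kind of scheduled testing being described here.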
Yeah, and it’s used by a lot of external tools, like we mentioned earlier. Speedlify, which I use, uses that API.
Yeah, so Speedlify is an interesting one, because I threw it on my list as well. Speedlify is the product of Zach Leatherman. It runs on Eleventy, a static site generator, and it basically lets you produce a website that gives you the ability to track all of these things over time to see how they change. I use it personally.
I’ve got my own server set up, and I have a bunch of my work sites in it, I’ve got a bunch of my personal sites in it, and I just let it run. It’s very cool, though, because it gives you all your Lighthouse scores, it gives you all your Core Web Vitals, everything we’ve talked about: LCP, CLS, and all the other web vitals, First Contentful Paint and all of that.
Erik, you mentioned being able to see what your site looks like two seconds in, five seconds in, what changes, where stuff moves, things like that. Yeah.
So just the Google tools alone: you’ve got the CrUX report, Search Console, PageSpeed Insights, Lighthouse, and web.dev/measure is another one. But as far as an external tool, my favorite definitely has to be webpagetest.org. It’s a free tool, and it just gives you so much information you can dig down through. You can shape your traffic, and you can tell it to block specific scripts.
So one of the nice things you can do is, if you want to test how your site would load without some third-party script, you can tell it to block all requests to that domain, and it will run the test and just block them. That’s cool. Another one is a single-point-of-failure test.
There’s a site, it’s like blackhole or something, where if you send a request to it, it essentially just spins and spins until it times out, so you can point your vendor scripts at that and see what kind of effect it has if that script times out. Another thing I’ve seen people do with WebPageTest is test changes: you can use its scripting functions to actually modify the content of the site you’re testing, in order to see how some changes would affect your overall score without actually making changes to the page itself. So if you wanted to, say, add a width and height to the images, you could write a regex or something to add those.
And as you were talking, I went ahead and plugged the Drunken UX site into WebPageTest. And man, I don’t want to tell you what we got. All I’m gonna say is I’m really glad I’m doing a lot of work to redesign this site and make it fast.
I loaded that up before the show too, so you don’t have to tell me.
Oh, it’s not good, folks. Go plug your site into webpagetest.org and see what the results are. And while you’re doing that, we will be right back.
Erik, thanks again for taking so much time this evening to sit down with us and go over this. I know we threw a few questions your way that I did not prep you for.
So I commend your ability to think on the fly and weather my whims. As absolutely judicious payment for your time and suffering, please take that microphone, take three or four minutes, whatever you need, and tell folks where they can find you, what you’ve got going on, and literally anything else you want them to know.
So pretty much everything that I’m up to, you can find on my website, erikrunyon.com. That’s E R I K R U N Y O N dot com.
No, the school that I went to up through my freshman year, great school, still spelled it with a C. There were only 30 kids in my class, it was a town of 800, and they still couldn’t get it right. So all of my socials and whatnot, and my upcoming presentations, are listed there. I’ll actually be talking about performance at HighEdWeb this year. Prerecorded once again, unfortunately.
Hopefully in 2022 HighEdWeb will be back in person. But yeah, that’s pretty much all I’m up to: putting together some performance presentations and very, very rarely writing content for the blog.
Mhm. Grand. I ran my site, which runs on Jekyll, through WebPageTest, and LCP is 1.5 seconds and CLS is 0.06.
That’s pretty good. Pretty darn good. I don’t think we mentioned it, but 0.1 is the threshold on CLS for good, and anything above 0.25 is considered poor. So anything below 0.1 and you are, yeah, doing it right.
Yeah. Aaron, my website is also Jekyll.
It’s nice. Right?
Yeah, it’s Ruby. It’s got Liquid. I mean, Markdown, all the good things.
Yeah. After using WordPress for well over a decade, it was a scary shift to move toward static site generators. But man, I love it, love it, love it. You can tell us how much you love static site generators, or how fast your site is. Connect with us on Twitter or Facebook at slash drunkenux, or Instagram at slash drunkenuxpodcast, or on our Discord at discord.com slash drunkenux.
Or the other way around?
Discord.com slash drunkenux. Right?
The other way around sounds good, but it’s definitely the other way around, you know. And as I’m sitting here talking, you caught me. Hey, I’m listening. It doesn’t matter. I may mangle my URLs, but boy, I know my links.
No, I was just sitting here thinking, as you talk about static site generators and Jekyll and performance and how that affects things. Erik and I were talking right before the show started about how, when we start thinking about page speed and
Yeah, yeah, I know what you’re referring to. A lot of groups aren’t going to give you the leeway to spend time on this sort of thing, so here’s something that may be especially relevant to you: if you want to make an argument to invest time and resources into improving performance, the best way to do that is to keep your personas close, but keep your users closer.
That... I don’t know how you work that out beforehand. But
You got me. Hey, Aaron? Yeah? Bye-bye.
Visit HighEdWeb.org to learn more and register for the conference.