I'm developing a PHP application using the Laravel 4.1 framework. So far I only have a few MySQL queries per page, and many of them are cached where possible using the Cache class, backed by a Redis server.

Currently I'm developing locally with an internal (but not localhost) MySQL database, using Apache 2.2.24 and PHP 5.4.17.

Using Chrome's Developer Tools, I'm checking the Network tab to see how long the page takes to load, but I'm seeing some weird results. The page spends a long time waiting for the content to be served, as you can see below:

Network output

As you can see, the new page takes 682ms waiting for the content to be sent back to the browser. Is there any way I can improve this? Why does Laravel have such a big overhead?

Apart from a custom Facade that we use to make using Entypo easier, there are no extra packages except the defaults that come with Laravel.

Does anybody know how this can be improved?

Have you used a PHP profiler to see what's taking up the most time? We've used New Relic and that's been a great help in pinpointing slow SQL queries and slow code. They offer a free 14-day trial. You could also use something like Xdebug. – ajtrichards Feb 20 '14 at 9:41
   
Not as of yet; I was hoping it would be a silly configuration setting. Xdebug is a PITA to install, but that may be the only way of really figuring this out. – James Feb 20 '14 at 9:42
   
I would recommend going with New Relic if you can, even if it's just for the free trial period. It really does give some good insights, and installation is simple (on Linux). The data shows up in the New Relic dashboard within 30 seconds or so. – ajtrichards Feb 20 '14 at 9:47
   
We've actually used them before but for a Node.js application, so our free trial has ended and we're not in a position to pay for it yet. But thanks! – James Feb 20 '14 at 9:50
   
If you sign up again with this link: newrelic.com/aws you get Standard for free :-) – ajtrichards Feb 20 '14 at 9:52

Accepted answer:
If I were you I would install the Chrome Clockwork extension plus the Laravel Clockwork package from Composer. Clockwork gives you a timeline where you can see what it is that takes so long, plus a database tab where you can see how long each query takes.

screenshot of clockwork

Happy hunting (:

 
   
This has really helped us! Although we've not really decreased the reported waiting time, I've managed to bring down the amount of queries we're executing (thanks eager-loading) and now we can log easily throughout the codebase. Thanks! – James Feb 21 '14 at 11:13

From https://stackoverflow.com/questions/21903612/decrease-waiting-time-in-a-laravel-application


 

Reducing TTFB

PUBLISHED 2 YEARS AGO BY RAPLIANDRAS

What Laravel-specific changes should I consider to reduce TTFB?


...but my TTFB is 10.8 seconds. I really want to reduce it.

 
Snapey · 1 month ago (546,805 XP)

That's waaaaayy beyond being a TTFB consideration.

That's a fault in your application or design.

What are you doing on the page?

 

From https://laracasts.com/discuss/channels/tips/reducing-ttfb




 

 

Stop worrying about Time To First Byte (TTFB)

 by John Graham-Cumming.
 
 

Time To First Byte is often used as a measure of how quickly a web server responds to a request and common web testing services report it. The faster it is the better the web server (in theory). But the theory isn't very good.

Wikipedia defines Time To First Byte as "the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser." But what do popular web page testing sites actually report? To find out, we created a test server that inserts delays into the HTTP response to see what's really being measured. The answer was a big surprise and showed that TTFB isn't a helpful measure.

When a web browser requests a page from a web server it sends the request itself and some headers that specify things like the acceptable formats for the response. The server responds with a status line (which is typically HTTP/1.1 200 OK indicating that the page was available) followed by more headers (containing information about the page) and finally the content of the page.

CloudFlare's TTFB test server behaves a little differently. When it receives a request it sends the first letter of HTTP/1.1 200 OK (the H) and then waits for 10 seconds before sending the rest of the headers and page itself. (You can grab the code for the TTFB server here; it's written in Go).
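The behaviour described above is easy to sketch. The real test server is written in Go; what follows is a hypothetical Node.js equivalent of the same idea (the response body and delay are illustrative, not CloudFlare's actual code): send the single letter "H", then hold back the rest of the headers and body.

```javascript
// Sketch (in Node.js; CloudFlare's real server is written in Go) of a test
// server that sends the first byte immediately and everything else much later.
const REST_OF_RESPONSE =
  'TTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 2\r\n\r\nhi';

function serveSlowFirstByte(socket, delayMs) {
  socket.write('H'); // a naive TTFB measurement stops its clock here
  setTimeout(() => {
    socket.end(REST_OF_RESPONSE); // real headers and body arrive much later
  }, delayMs);
}

// Wire it to a TCP listener (10-second delay, as in the article):
// require('net').createServer((s) => serveSlowFirstByte(s, 10000)).listen(8080);
```

Any tool that reports TTFB as soon as the "H" arrives will claim this server is fast, even though the page itself takes 10 extra seconds.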

If you ask WebPageTest to download a page from the CloudFlare TTFB server you get the following surprise. WebPageTest reported the Time To First Byte as the time the H was received (and not the time the page itself was actually sent). The 10 second wait makes this obvious.


Exactly the same number is reported by Gomez.

The TTFB being reported is not the time of the first data byte of the page, but the first byte of the HTTP response. These are very different things because the response headers can be generated very quickly, but it's the data that will affect the most important metric of all: how fast the user gets to see the page.

At CloudFlare we make extensive use of nginx, and while investigating TTFB we came across a significant difference in nginx's TTFB depending on whether compression is used. Gzip compression of web pages greatly reduces the time it takes a web page to download, but the compression itself has a cost. That cost causes TTFB to be greater even though the complete download is quicker.

To illustrate that we took the largest Wikipedia page (List of Advanced Dungeons and Dragons 2nd Edition Monsters) and served it using nginx with and without gzip compression enabled. The table below shows the TTFB and total download time with compression on and off.

                          |  TTFB   |  Page loaded
--------------------------|---------|--------------
No compression (gzip off) |  213µs  |  43ms
Compressed (gzip on)      |  1.7ms  |  8ms

Notice how with gzip compression on, the page was downloaded 5x faster, but the TTFB was 8x greater. That's because nginx waits until compression has started before sending the HTTP headers; when compression is turned off it sends the headers straight away. So if you look at TTFB it looks as if compression is a bad idea. But if you look at the download time you see the opposite.
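The knob behind this trade-off is nginx's gzip module. As a rough illustration (these are real nginx directives, but the values below are only examples, not CloudFlare's configuration):

```nginx
# Illustrative nginx settings; higher compression saves bandwidth
# at the cost of a later first byte.
gzip            on;
gzip_comp_level 5;      # higher levels shrink the page more but delay the headers
gzip_min_length 1024;   # skip tiny responses where compression isn't worth it
gzip_types      text/plain text/css application/javascript application/json;
```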

From the end user perspective TTFB is almost useless. In this (real) example it's actually negatively correlated with the download time: the worse the TTFB the better the download time. Peering into the nginx source code we realized we could cheat and send the headers quickly so that it looked like our TTFB was fantastic even with compression, but ultimately we decided not to: that too would have negatively impacted the end user experience because we would have wasted a valuable packet right when TCP is going through slow start. It would have made CloudFlare look good in some tests, but actually hurt the end user.

Probably the only time TTFB is useful is as a trend. And it's best measured at the server itself so that network latency is eliminated. By examining a trend it's possible to spot whether there's a problem on the web server (such as it becoming overloaded).

Measuring TTFB remotely means you're also measuring the network latency at the same time which obscures the thing TTFB is actually measuring: how fast the web server is able to respond to a request.

At CloudFlare TTFB is not a significant metric. We're interested in optimizing the experience for end users and that means the real end-user page being visible time. We'll be rolling out tools specifically to monitor end-user experience so that all our publishers get to see and measure what their visitors are experiencing.

 
From https://blog.cloudflare.com/ttfb-time-to-first-byte-considered-meaningles/

What is Waiting (TTFB) in DevTools, and what to do about it

Written by Christophe Limpalair on 12/04/2015

Have you ever opened Chrome's DevTools and wondered what everything meant?

I've had questions like: What should I be looking at? What should I try to optimize, and how?

Since I wanted answers, I set out on a mission to filter out the most important information. Then, I dug deeper to understand what needed to be fixed, and how to fix it. 

This article is a result of this mission. Let's take a look at what Waiting (TTFB) represents, and let's answer three important questions: Should it be optimized? When should it be optimized? How can one optimize it?

Does the TTFB (Time to First Byte) really matter?

Yes, it does.

So then the question becomes, should you focus your efforts on optimizing it for a faster website? It depends.

What does it depend on? How slow your TTFB is. 

There's a point when any further optimization won't give you as much of a return. Also, as Ilya says in his post, the timing isn't the only thing that matters—what's also important is what's in those first bytes. 

In the following example, the TTFB is pretty good. 160ms is a decent number, although it could be a bit better. (I'll explain how to make it better in a moment)



However, I go on sites with 1+ second TTFB for relatively simple pages and that, my friends, can be optimized. 



Check your timing and see, but remember that just because you don't have a long TTFB doesn't mean your visitors don't.

How come? That's what we're about to find out. First, let's explain what the TTFB actually represents, and that will explain why the number changes depending on a number of factors.

What is TTFB?

Time to First Byte is how long the browser has to wait before receiving data. In other words, it is the time spent waiting for the initial response. That's why it's pretty important: in order for your page to be rendered, it needs to receive the necessary HTML data. The longer it takes to get that data, the longer it takes to display your page.

The thing is, just because your TTFB is 130ms doesn't mean that someone in Canada or Brazil will have the same number. In fact, they won't.

Here's why.

The Waiting (TTFB) represents a number of things, like:
  1. Server response time
  2. Time to transfer bytes (a.k.a. the latency of the round trip)

Server response time: This is how long it takes for your server to generate a response and send it back. The good news, if this is the bottleneck, is that you have more control over it. Well, assuming you have access to the server. 

The bad news is that you have some work to do. You have to figure out which part of your system is taking time to respond, and that could be a number of things.

By the way, according to Google's PageSpeed Insights, your server response time should be below 200ms.

If your response time is higher than that (we'll talk about figuring out your timing in the next section), there are a number of possible reasons, like:
  • Slow database queries
  • Slow logic
  • Resource starvation
  • Too many slow frameworks/libraries/dependencies
  • Slow hardware

Time to transfer bytes: The Internet can be pretty darn fast depending on where you are and what kind of connection you have access to, but even then we are limited by the distance packets have to travel. According to Ilya Grigorik, we're already quite close to the speed of light, so we shouldn't expect more speed there unless we bend physics. The way to speed things up, then, is to shorten distances.

That's where CDNs come in, and that's how you can improve this part.

If your server responds very quickly, and the bytes have to travel a very short distance, your TTFB should be low. That doesn't mean a user won't run into a slow TTFB if they have a crappy connection or live far away from any of your CDN nodes. That's one reason why the CDN provider that you choose is important.

Is it my server, or is it the network?

This is an important question to answer, because it will make all the difference in how you go about optimizing your TTFB.

If your server is fast enough but the network is causing delays, optimizing your server is a waste of time and won't give you the results you're looking for.

So how can you tell?

Measure, measure, measure.

Thankfully, there are tools out there to help with this. 

Measuring network time

Option #1

One tool available is called the Navigation Timing API.



For example, subtracting fetchStart from responseStart will give you the round-trip time to first byte.

See how to implement the API in the introduction.
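The fetchStart/responseStart subtraction can be wrapped in a tiny helper. This is a hedged sketch: the two field names are part of the Navigation Timing API, but the helper itself is just an illustration.

```javascript
// Waiting (TTFB) from Navigation Timing fields: everything between starting
// the fetch and the first byte of the response arriving (DNS + connect +
// request + server think time), in milliseconds.
function timeToFirstByte(timing) {
  return timing.responseStart - timing.fetchStart;
}

// In the browser you would pass the page's own timing object:
//   console.log('Waiting (TTFB):', timeToFirstByte(performance.timing), 'ms');
```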

Option #2

You may not know this, but Google Analytics actually includes a section that lists some of this information for you.

Go to Behavior -> Site Speed -> Overview, and you should be greeted with this:



Be careful of averages, though. They hide outliers that probably shouldn't be hidden.

Measuring server response time

While the Navigation Timing API has a requestStart option, which returns "the time immediately before the user agent starts requesting the current document from the server", it does not have a requestEnd due to a few reasons. That's too bad, though, because it means we will be getting a number that includes the trip back to the user's browser (since we'd have to use responseStart).

Hmm. Are there any other options?

There are definitely ways of profiling your data stores, web server response times, and other parts of your stack that data travels through in order to form a response. That's where monitoring tools come in.

But what about timing the overall response time instead of individual pieces? Honestly, this is tricky to find information on. It looks like some of the services mentioned in the previously linked post offer this kind of visibility.

If you know of an accurate way to do this, I'd love to hear about it.

How can I measure my TTFB on different networks?

Like I said earlier, just because your Waiting (TTFB) time is low doesn't mean all of your users will also have a low TTFB time.

How can you get a more accurate representation? By throttling your own network and seeing how long it takes. That's when the true colors of your optimization efforts come out. 

Yes, it may be super fast when everything is cached and the origin is close to you, but what about when your user is halfway across the world?

Alright, so how do you throttle your connection? Thankfully, DevTools makes this quite easy.

Click on "No Throttling" and select the option you want.





I chose GPRS, which is awfully slow, and still had a pretty good time of ~600ms. Not too bad considering how bad that network is.



Of course, it would be better if you actually had real people testing this on different kinds of networks, devices, and in different locations.

Conclusion

While DevTools can be confusing at first sight (because web performance is complicated), I hope this article gave you a better understanding of the Waiting (TTFB) results.

Another important thought to retain from reading this is that even if your website loads very quickly on your own devices and networks, it doesn't mean that the same holds for all networks and devices. Measure as much as you can. That's the only way you can paint a more accurate picture of what's going on and whether you are prematurely optimizing or not.

Happy learning!

Has this helped you get a better understanding of Waiting (TTFB)?

About Christophe

Current goals include finding ways of doing more in less time, creating a valuable tool for developers, and sharing my findings. ScaleYourCode is the product of these goals.
