G190852562
Posts: 162
Posted 14:11 Dec 06, 2014 |

I'm looking at the exercises we've done in class and would like an explanation for these questions:

1. Is it the case that for all functions f, f = O(f * log(f))? Why or why not?

2. Assume that f = O(g), does g = O(f + g)? Why or why not?

Thanks.

yxu22
Posts: 25
Posted 14:35 Dec 06, 2014 |

1. Yes.

We want to show f = O( f * log(f) ), i.e., that there is a constant c with

f <= c * f * log(f)

Divide both sides by f (runtimes are positive, so f > 0):

--> c * log(f) >= 1

--> c >= 1 / log(f)

When n is large enough, f(n) >= 2, so log(f) >= 1,

--> 1 / log(f) <= 1

So any constant c >= 1 works: f <= c * f * log(f) holds for all large n,

so f = O( f * log(f) )
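If it helps, here is a quick numeric sanity check in Python (not a proof; f(n) = n is just an example runtime function):

import math

def f(n):
    return n   # example only; any runtime with f(n) >= 2 works

c = 1
for n in [2, 10, 100, 10**6]:
    assert f(n) <= c * f(n) * math.log2(f(n))
print("f <= c * f * log(f) holds at all sampled n")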

 

2. Yes.

We want to show g = O( f + g ), i.e., that there is a constant c with

g <= c * ( f + g )

--> c >= g / ( f + g )

Because f > 0,

--> f + g > g

--> g / ( f + g ) < 1

So any constant c >= 1 works: c >= g / ( f + g ), hence

--> g <= c * ( f + g )

--> g = O( f + g )
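Same idea here, a small check rather than a proof, with example functions f(n) = n and g(n) = n^2 (chosen so that f = O(g) holds):

def f(n):
    return n        # example: f(n) = n

def g(n):
    return n * n    # example: g(n) = n^2, so f = O(g)

c = 1
for n in [1, 10, 100, 10**4]:
    assert g(n) <= c * (f(n) + g(n))   # works because f > 0
print("g <= c * (f + g) holds at all sampled n")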

 

G190852562
Posts: 162
Posted 15:28 Dec 06, 2014 |
yxu22 wrote:

(full proof quoted above)

This explanation is a bit hard to follow. I've tried reading it countless times to no avail. I feel like this is memorizing without understanding. Is there a way you can explain it in simpler terms, in words?

Last edited by G190852562 at 15:28 Dec 06, 2014.
G190852562
Posts: 162
Posted 15:51 Dec 06, 2014 |

Am I understanding this correctly?

The running time (in its fullest detail) of an algorithm is represented by theta notation. The worst-case scenario is represented by big-oh notation such that given a function f, f is O(g) if there exist constants c > 0 and n_0 > 0 such that f(n) <= c * g(n) for all n >= n_0. The best-case scenario is represented by big-omega notation such that given a function f, f is Omega(g) if there exist constants c > 0 and n_0 > 0 such that f(n) >= g(n)/c for all n >= n_0.

I also want to know what this n_0 is. From what I learned during group work, n_0 is the intersection between the graphs of functions f and g.
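Here's a little script I wrote to convince myself of the definition, with made-up functions (f(n) = 3n + 5 and g(n) = n^2 are just examples, not from class):

def f(n):
    return 3 * n + 5   # made-up example

def g(n):
    return n * n       # made-up example

c = 1
# Find the smallest n0 such that f(n) <= c * g(n) for every n >= n0
# (checking up to 100 is enough here since g outgrows f).
n0 = next(n for n in range(1, 100)
          if all(f(m) <= c * g(m) for m in range(n, 100)))
print("n0 =", n0)   # prints n0 = 5: 3*5 + 5 = 20 <= 25 = 5^2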

G190852562
Posts: 162
Posted 15:55 Dec 06, 2014 |

I feel like that definition is pure memorization. I'm wondering why function g is multiplied by a constant whereas f isn't. Is it because that's how it's stated formally?

G190852562
Posts: 162
Posted 16:14 Dec 06, 2014 |

I'm going to give it a try and make it easier to follow (from my understanding of the explanation):

f = O(f * log(f))

*Note: to set up equations with O(g), remember =>

f = O(g) <==> f <= c * g, where c is a constant

Now, it is usually stated that f(n) > 1, so f == 2 would be a good substitution to use in this problem later:

1) f = O(f * log(f)) ==> f <= c * f * log(f) (divide both sides by f)

2) f/f <= c * log(f) ==> 1 <= c * log(f) (in the wiki it is usually said that f(n) > 1, so substitute f = 2)

3) 1 <= c * log(2) (log n is the number of times n has to be divided by 2 to reach 1, because we are using log base 2, so log(2) == 1)

If you are confused about log base 2, use this calculator http://www.miniwebtool.com/log-base-2-calculator/, Google, or this page http://www.purplemath.com/modules/solvelog2.htm

4) 1 <= c * log(2) ==> 1 <= c * 1 (we usually ignore constants, so we'll drop c in the next step)

5) 1 <= c * 1 ==> 1 <= 1 (which is true because 1 is equal to itself, so the condition is satisfied)

6) Since we broke it down and found that it is true for numbers greater than 1 (e.g. 2; the log base 2 of 1 == 0, see the little table below),

f = O(f * log(f)) is true
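And to see the log base 2 values concretely (just a tiny illustration in Python):

import math
for n in [1, 2, 4, 8, 16]:
    print(n, math.log2(n))   # log2(1) = 0, log2(2) = 1, ...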

Did this make more sense?

rabbott
Posts: 1649
Posted 16:39 Dec 06, 2014 |

The two original questions are fairly trivial intuitively. Perhaps you're making them more complex than they need to be.

1. Is it the case that for all functions f, f = O(f * log(f))? Why or why not?

Yes. For our purposes we can assume that f >= 2. We are talking about runtimes and don't want to get distracted by functions with negative values; that makes no sense for runtimes. I picked 2 since log_2(2) = 1. In general, in this sort of analysis we don't worry about small values.

Given that assumption, f <= f * log(f). So let c = 1 and n0 = 0. That shows that f = O(f * log(f)). To be more careful you might worry about cases in which f is not always at least 2, but for the purposes of this course we aren't concerned with those details. Intuitively, if you have two functions f1 and f2 and f1 is always at most f2, then f1 = O(f2). That should be intuitively obvious; if it isn't, I'm not sure what to say to make it obvious.
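If it helps to see that numerically, here is a two-line check (f1(n) = n + 2 is just an example that is always at least 2):

import math
f1 = lambda n: n + 2                       # example; always >= 2
f2 = lambda n: f1(n) * math.log2(f1(n))    # f1 * log(f1)
print(all(f1(n) <= f2(n) for n in range(1000)))   # True: c = 1, n0 = 0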

2. Assume that f = O(g), does g = O(f + g)? Why or why not?

Yes again. Since f + g > g, g = O(f+g) with c = 1 and any n0; the assumption f = O(g) isn't even needed. Intuitively, again, this should be obvious: given any two non-negative functions f and g, each is O(f+g), since each is at most f+g. This is the same basic argument as above.
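The same check in code, with arbitrary non-negative sample values standing in for f(n) and g(n):

f_vals = [0, 1, 5, 100]   # arbitrary non-negative samples
g_vals = [3, 2, 50, 7]
print(all(f <= f + g and g <= f + g
          for f, g in zip(f_vals, g_vals)))   # True: each is O(f + g) with c = 1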

Last edited by rabbott at 16:39 Dec 06, 2014.
G190852562
Posts: 162
Posted 16:44 Dec 06, 2014 |
AnonCS312 wrote:

(attempted walkthrough quoted above)

Since you say it's assumed that f(n) > 1, that means the answer to this question is yes:

Do the assumptions that all functions are greater than or equal to 1 and that they never decrease make sense for functions that are supposed to represent the runtime of algorithms? Why or why not?

This is because there is always a runtime for every algorithm. There is no such thing as negative runtime. Correct?

Also, my guess is that you chose 2 to replace f because of the log_2(n) fact. It's for simplicity's sake. For instance, if it were log_3(n), you would've selected 3 to replace f.

I posted questions in each of my posts. I'm not sure whether they were all noticed. Perhaps I should bold my questions?

G190852562
Posts: 162
Posted 16:48 Dec 06, 2014 |

Seems like Prof. Abbott answered my questions before I submitted the previous post.

Last edited by G190852562 at 16:48 Dec 06, 2014.
G190852562
Posts: 162
Posted 17:01 Dec 06, 2014 |

By the way:

I also want to know what is this n_0. From what I learned during group work, n_0 is the intersection between the graphs of functions f and g.

Also:

Just to double check my understanding, does function g derive from f? Or are they both independent functions? I feel that it is the former.

And to reiterate:

The running time (in its fullest detail) of an algorithm is represented by theta notation. The worst-case scenario is represented by big-oh notation such that given a function f, f is O(g) if there exist constants c > 0 and n_0 > 0 such that f(n) <= c * g(n) for all n >= n_0. The best-case scenario is represented by big-omega notation such that given a function f, f is Omega(g) if there exist constants c > 0 and n_0 > 0 such that f(n) >= g(n)/c for all n >= n_0.

Last edited by G190852562 at 17:04 Dec 06, 2014.
G190852562
Posts: 162
Posted 18:05 Dec 06, 2014 |

I have a lot of questions to ask, though it may seem like it's a bit too late. However, I don't want to give up. I am familiar with most, if not all, of the content (I watched all the videos and did every single quiz).

I wish someone could instantly answer my questions.

Last edited by G190852562 at 18:05 Dec 06, 2014.
rabbott
Posts: 1649
Posted 19:37 Dec 06, 2014 |
AnonCS312 wrote:

By the way:

I also want to know what this n_0 is. From what I learned during group work, n_0 is the intersection between the graphs of functions f and g.

n0 is not an intersection. n0 is a value of n, the input to the function: it is the value such that every greater value of n satisfies the condition with c. That sounds very abstract, but n0 is simply a point along the x axis that you want n to be greater than. If you ask for the n0 such that x^2 >= 25 (for non-negative x), the answer would be n0 = 5.
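Restating that example in code (nothing more than the arithmetic):

# Smallest n0 with n^2 >= 25 for every n >= n0, over non-negative n.
n0 = next(n for n in range(100) if n * n >= 25)
print(n0)   # 5; n^2 is increasing for n >= 0, so it stays >= 25 from there on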

Also:

Just to double check my understanding, does function g derive from f? Or are they both independent functions? I feel that it is the former. 

The functions f and g are independently defined. 

And to reiterate:

The running time (in its fullest detail) of an algorithm is represented by theta notation. The worst-case scenario is represented by big-oh notation such that given a function f, f is O(g) if there exist constants c > 0 and n_0 > 0 such that f(n) <= c * g(n) for all n >= n_0. The best-case scenario is represented by big-omega notation such that given a function f, f is Omega(g) if there exist constants c > 0 and n_0 > 0 such that f(n) >= g(n)/c for all n >= n_0.

Think of big-Oh as an upper bound and big-Omega as a lower bound. If f = O(g), that means intuitively that f(n) < g(n). It's not quite that simple, since it really means that there is some constant c for which f(n) < c*g(n). And even that's not quite it, since f(n) < c*g(n) is required only when n > n0. In other words, if n is big enough, f(n) < c*g(n).
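In code, with made-up functions (f(n) = 10n and g(n) = n^2 here are only examples):

f = lambda n: 10 * n    # example
g = lambda n: n * n     # example
c, n0 = 1, 10
# f = O(g): f(n) <= c * g(n) whenever n > n0
print(all(f(n) <= c * g(n) for n in range(n0 + 1, 10000)))   # True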

 

Last edited by rabbott at 19:38 Dec 06, 2014.
rabbott
Posts: 1649
Posted 19:39 Dec 06, 2014 |
AnonCS312 wrote:

I have a lot of questions to ask, though it may seem like it's a bit too late. However, I don't want to give up. I am familiar with most, if not all, of the content (I watched all the videos and did every single quiz).

I wish someone could instantly answer my questions.

You're right, it's late to start asking questions. But if you ask them one at a time, some of them will get answered. To make it simpler, ask each one on a separate thread.

Last edited by rabbott at 19:39 Dec 06, 2014.