Wednesday, March 30, 2016

Convert String to ZigZag Bottom Up

This material has been expanded further on my new website; you can find the updated details here: https://k5kc.com/cs/algorithms/zigzag-conversion-problem/.


Problem

The string "PAYPALISHIRING" is written in a zigzag pattern on a given number of rows like this: (you may want to display this pattern in a fixed font for better legibility)

Example


P   A   H   N
A P L S I I G
Y   I   R
And then read line by line: "PAHNAPLSIIGYIR"
Write the code that will take a string and make this conversion given a number of rows:
string convert(string text, int nRows);
convert("PAYPALISHIRING", 3) should return "PAHNAPLSIIGYIR".


Solution

Method 1 - Use list to save the substrings


A straightforward solution is to actually build up the zigzag character matrix and read the matrix into a string.
Here is the basic algorithm:
  • Maintain a boolean down that tells whether we are currently moving down or up in the zigzag order.
  • When row = 0, start moving down; when row = nRows - 1, start moving up.
  • Keep doing this until the whole string is exhausted.
  • Use a list of StringBuilders, one per row, to collect characters in zigzag order; at the end, concatenate all the rows to produce the output.
Here is the code in Java:
import java.util.ArrayList;

public class Solution {
    public String convert(String s, int nRows) {
        if(s == null || s.length() <= 1 || nRows <= 1){
            return s;
        }
        
        // One StringBuilder per row of the zigzag matrix
        ArrayList<StringBuilder> list = new ArrayList<StringBuilder>();
        for(int i = 0; i < nRows; i++){
            list.add(new StringBuilder());
        }
        
        int i = 0;
        boolean down = false;
        int row = 0;
        
        // Walk the string, appending each character to its current row
        // and reversing direction at the top and bottom rows
        while(i < s.length()){
            list.get(row).append(s.charAt(i));
            if(row == 0){
                down = true;
            }else if(row == nRows - 1){
                down = false;
            }
            
            if(down){
                row++;
            }else{
                row--;
            }
            
            i++;
        }
        
        // Concatenate the rows from top to bottom
        i = 0;
        StringBuilder output = new StringBuilder();
        while( i < list.size()){
            output.append(list.get(i).toString());
            i++;
        }
        return output.toString();
    }
}
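
For a quick sanity check, a small driver along these lines (my own addition, not part of the original solution) should reproduce the example from the problem statement:

public class SolutionDemo {
    public static void main(String[] args) {
        // Expected output: PAHNAPLSIIGYIR
        System.out.println(new Solution().convert("PAYPALISHIRING", 3));
    }
}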


The time complexity is O(n), and the space complexity is also O(n), since the row buffers (O(nRows × max columns per row)) together hold every character of the input.

Method 2 - Append the character on the fly

Another way is to calculate the corresponding index on the fly and append the character into a string.
Take convert("PAYPALISHIRING", 4) as an example. The matrix will look as follows :
P   I   N
A L S I G
Y A H R
P   I

which can be expanded as below (you will see shortly why we rewrite it like this):
P   I   N
A   S   G
Y   H
P   I
A   R
L   I

Some useful properties:
  • Among the full expanded columns, the odd ones contain nRows characters and the even ones contain (nRows - 2) characters, so one full zigzag period covers n = 2*nRows - 2 characters.
  • For the first and last rows, we read a single character from each expanded column.
  • For the remaining rows, we read two characters from each expanded column: one from the top part and one from the bottom part.
  • One edge case is that nRows = 1, where (nRows*2 - 2) becomes 0. In this case, we should simply return the original string.
public String convert(String s, int nRows) {
  if (nRows == 1) return s;

  StringBuilder ss = new StringBuilder();
  int n = nRows + nRows - 2;  // length of one full zigzag period
  for (int i = 0; i < nRows; ++i) {
    int cur = i;  // index of the first character in row i
    while (cur < s.length()) {
      ss.append(s.charAt(cur));
      cur += n;  // jump to row i's character in the next period
      // middle rows also pick up the character on the diagonal,
      // located at cur - 2*i
      if (i > 0 && i < nRows - 1 && (cur - i - i) < s.length()) {
        ss.append(s.charAt(cur - i - i));
      }
    }
  }
  return ss.toString();
}
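
As a quick check (my own test, not from the referenced posts), assume the method above is placed in a class named ZigZag (a hypothetical name); reading the 4-row matrix above row by row gives "PINALSIGYAHRPI", and the call below should reproduce it:

public class ZigZagCheck {
    public static void main(String[] args) {
        // Reading the 4-row matrix row by row gives "PINALSIGYAHRPI"
        System.out.println(new ZigZag().convert("PAYPALISHIRING", 4));
    }
}

Unlike Method 1, this approach allocates only the output buffer, so the extra space beyond the result is O(1).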


Thanks.
Reference
http://www.params.me/2014/04/leetcode-zigzag-conversion-in-java.html
http://n00tc0d3r.blogspot.in/2013/06/zigzag-conversion.html
https://leetcode.com/problems/zigzag-conversion/

Friday, March 11, 2016

Growth Rate - The Importance of Asymptotics

It really is true that if algorithm A is o(algorithm B) then for large problems A will take much less time than B.

Definition: If (the number of operations in) algorithm A is o(algorithm B), we call A asymptotically faster than B.

Example: The following functions are ordered by growth rate, i.e., each function is little-oh of the next.
log(log(n)), log(n), (log(n))^2, n^(1/3), n^(1/2), n, n·log(n), n^2/log(n), n^2, n^3, 2^n, 4^n, n!

One way to compare two functions is to use L'Hôpital's rule. Here is a good explanation:
Growth Rates of Functions and L'Hospital's Rule
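
As a small worked example (my own, in the spirit of the linked explanation), L'Hôpital's rule shows that log(n) grows strictly slower than n:

% Compare log(n) with n by differentiating numerator and denominator:
\lim_{n \to \infty} \frac{\log n}{n}
  = \lim_{n \to \infty} \frac{1/n}{1}
  = 0,
\qquad \text{hence } \log n = o(n).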

Here is a snapshot of common growth rates in increasing order:
1, log(log(n)), log(n), n, n·log(n), n^2, 2^n, n!, n^n

Reference
https://cs.nyu.edu/courses/fall02/V22.0310-002/class-notes.html

http://www.cs.odu.edu/~cs381/cs381content/function/growth.html


Asymptotic Notation

It's been a while since I blogged. To start, I am posting the basics of asymptotic analysis of algorithms, which we study at the beginning of the course.
Now we are going to be less precise and worry only about approximate answers for large inputs.

The Big-Oh Notation

Definition: Let f(n) and g(n) be real-valued functions of a single non-negative integer argument. We write f(n) is O(g(n)) if there is a positive real number c and a positive integer n0 such that f(n)≤cg(n) for all n≥n0.
What does this mean?
For large inputs (n≥n0), f is not much bigger than g (f(n)≤cg(n)).
Examples to do on the board
  1. 3n-6 is O(n). Here f(n) = 3n-6 ≤ c·n for, say, c = 4 and all n ≥ 1. Some less common ways of saying the same thing follow.
  2. 3x-6 is O(x).
  3. If f(y)=3y-6 and id(y)=y, then f(y) is O(id(y)).
  4. 3n-6 is O(2n)
  5. 9n^4+12n^2+1234 is O(n^4).
  6. innerProduct is O(n) (a sketch of such a routine follows this list)
  7. innerProductBetter is O(n)
  8. innerProductFixed is O(n)
  9. countPositives is O(n)
  10. n+log(n) is O(n).
  11. log(n)+5log(log(n)) is O(log(n)).
  12. 1234554321 is O(1).
  13. 3/n is O(1). True but not the best.
  14. 3/n is O(1/n). Much better.
  15. innerProduct is O(100n+log(n)+34.5). True, but awful.
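The innerProduct routines above come from the course notes and are not reproduced here; a minimal sketch of such a function (my own, with an assumed signature) makes it clear why the running time is O(n):

// A minimal sketch (my own; the course's actual innerProduct is not shown here).
// The loop body executes n times and each iteration does constant work,
// so the total number of operations is O(n).
static double innerProduct(double[] a, double[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}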
A few theorems give us rules that make calculating big-Oh easier.

Theorem (arithmetic): Let d(n), e(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and e(n) is O(g(n)). Then
  1. ad(n) is O(f(n)) for any nonnegative a
  2. d(n)+e(n) is O(f(n)+g(n))
  3. d(n)e(n) is O(f(n)g(n))
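A small illustration (my own numbers) of how rules 2 and 3 combine:

% With d(n) = 3n = O(n) and e(n) = 7n^2 = O(n^2):
d(n) + e(n) = 3n + 7n^2 \;\text{is}\; O(n + n^2) = O(n^2), \qquad
d(n)\, e(n) = 21 n^3 \;\text{is}\; O(n \cdot n^2) = O(n^3).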
Theorem (transitivity): Let d(n), f(n), and g(n) be nonnegative real-valued functions of a nonnegative integer argument and assume d(n) is O(f(n)) and f(n) is O(g(n)). Then d(n) is O(g(n)).
Theorem (special functions): (Only n varies)
  1. If f(n) is a polynomial of degree d, then f(n) is O(n^d).
  2. n^x is O(a^n) for any x>0 and a>1.
  3. log(n^x) is O(log(n)) for any x>0.
  4. (log(n))^x is O(n^y) for any x>0 and y>0.

Definitions: (Common names)
  1. If a function is O(log(n)) we call it logarithmic.
  2. If a function is O(n) we call it linear.
  3. If a function is O(n^2) we call it quadratic.
  4. If a function is O(n^k) with k≥1, we call it polynomial.
  5. If a function is O(a^n) with a>1, we call it exponential.
Remark: The last definitions would be better with a relative of big-Oh, namely big-Theta, since, for example, 3log(n) is O(n^2), but we do not call 3log(n) quadratic.

Relatives of the Big-Oh

Big-Omega and Big-Theta

Recall that f(n) is O(g(n)) if for large n, f is not much bigger than g. That is g is some sort of upper bound on f. How about a definition for the case when g is (in the same sense) a lower bound for f?
Definition: Let f(n) and g(n) be real valued functions of an integer value. Then f(n) is Ω(g(n)) if g(n) is O(f(n)).
Remarks:
  1. We pronounce f(n) is Ω(g(n)) as "f(n) is big-Omega of g(n)".
  2. What the last definition says is that we say f(n) is not much smaller than g(n) if g(n) is not much bigger than f(n), which sounds reasonable to me.
  3. What if f(n) and g(n) are about equal, i.e., neither is much bigger than the other?
Definition: We write f(n) is Θ(g(n)) if both f(n) is O(g(n)) and f(n) is Ω(g(n)).
Remarks We pronounce f(n) is Θ(g(n)) as "f(n) is big-Theta of g(n)"
Examples to do on the board.
  1. 2x^2+3x is Θ(x^2) (worked out with explicit constants after this list).
  2. 2x^3+3x is not Θ(x^2).
  3. 2x^3+3x is Ω(x^2).
  4. innerProductRecursive is Θ(n).
  5. binarySearch is Θ(log(n)). Unofficial for now.
  6. If f(n) is Θ(g(n)), then f(n) is Ω(g(n)).
  7. If f(n) is Θ(g(n)), then f(n) is O(g(n)).
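To see the first example concretely (constants chosen by me, not from the notes), explicit bounds witness both the O and the Ω direction:

% Explicit constants witnessing 2x^2 + 3x = \Theta(x^2):
2x^2 \;\le\; 2x^2 + 3x \;\le\; 5x^2 \quad \text{for all } x \ge 1,
% so c_1 = 2 and c_2 = 5 with n_0 = 1 satisfy both bounds.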

Little-Oh and Little-Omega

Recall that big-Oh captures the idea that for large n, f(n) is not much bigger than g(n). Now we want to capture the idea that, for large n, f(n) is tiny compared to g(n).
If you remember limits from calculus, what we want is that f(n)/g(n)→0 as n→∞. However, the definition we give does not use limits (it essentially has the definition of a limit built in).

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is o(g(n)) if for any c>0, there is an n0 such that f(n)≤c·g(n) for all n>n0. This is pronounced as "f(n) is little-oh of g(n)".

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is ω(g(n)) if g(n) is o(f(n)). This is pronounced as "f(n) is little-omega of g(n)".
Examples: log(n) is o(n) and n^2 is ω(n·log(n)).


Properties of Notations


Notation   Reflexive   Transitive   Symmetric   Transpose Symmetric
O          Yes         Yes          No          No
Ω          Yes         Yes          No          Yes *
Θ          Yes **      Yes ***      Yes         No
o          No          Yes          No          No
ω          No          Yes          No          No

*   f(n) = O(g(n)) iff g(n) = Ω(f(n))
**  f(n) = Θ(f(n))
*** f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n))



Source
https://cs.nyu.edu/courses/fall02/V22.0310-002/class-notes.html