
Haskell


fladdermasken

Off-Topic Moderator
Level 39
Joined
Dec 27, 2006
Messages
3,690
generates the most beautiful code.

Code:
    import Data.List
     
    list1 = [1,2,3,4,5,6]
     
    -- Problem 1: find the last element of a list
    lastList :: [a] -> a
    lastList [] = error "list cannot be empty"
    lastList (x:[]) = x
    lastList (x:xs) = lastList xs
     
    -- Problem 2: Find the last but one element of a list.
    lastButOne :: [a] -> a
    lastButOne [] = error "list cannot be empty"
    lastButOne (x:[]) = error "list must be larger than one"
    lastButOne [x,_] = x
    lastButOne (_:xs) = lastButOne xs
     
     
    -- Problem 3: Find the K'th element of a list. The first element in the list is number 1.
    elementAt :: [a] -> Int -> a
    elementAt [] _ = error "index out of range"
    elementAt (x:_) 1 = x
    elementAt (_:xs) n
        | n < 1 = error "index must be at least 1"
        | otherwise = elementAt xs (n - 1)
     
    -- Problem 4: find the number of elements of a list
    countList :: [a] -> Int
    countList [] = 0
    countList (x:xs) = 1 + countList xs
     
    -- Problem 5: reverse a list (I assume, not using "reverse")
    reverser :: [a] -> [a]
    reverser [] = []
    reverser (x:xs) = reverser xs ++ [x]
     
    -- Problem 6: find out whether a list is a palindrome
    isPalindrome :: (Eq a) => [a] -> Bool
    isPalindrome [] = True
    isPalindrome xs = let xs2 = reverse xs
                      in xs2 == xs
     
    -- Problem 7: flatten a nested list structure
    -- NOTE: This doesn't strictly solve the problem. May want to revisit later.
    flatten :: [[a]] -> [a]
    flatten = foldr (++) []
     
    {-
      Problem 8: Eliminate consecutive duplicates of list elements.
      If a list contains repeated elements they should be replaced with
      a single copy of the element. The order of the elements should not be changed.
     
      > compress ["a","a","a","a","b","c","c","a","a","d","e","e","e","e"]
      ["a","b","c","a","d","e"]
     
    -}
     
    -- excellent opportunity to play around with mapAccumL
    compress :: (Eq x) => [x] -> [x]
    compress xs = fst $ mapAccumL (\ys s -> if not (null ys) && last ys == s then (ys, s) else (ys ++ [s], s)) [] xs
     
    -- the simplest solution from the Wiki
    compressr :: Eq a => [a] -> [a]
    compressr = map head . group
     
    {-
      Problem 9: Pack consecutive duplicates of list elements into sublists.
      If a list contains repeated elements they should be placed in separate sublists.
     
      *Main> pack ['a', 'a', 'a', 'a', 'b', 'c', 'c', 'a', 'a', 'd', 'e', 'e', 'e', 'e']
      ["aaaa","b","cc","aa","d","eeee"]
    -}
     
    pack :: (Eq a) => [a] -> [[a]]
    pack = group

Share your Haskell love stories.
 

fladdermasken
Haven't done any SML, but I think HINDYhat recommended it to me. Agda is also sexy, but I have only used it as a proof assistant.

We had a guest lecture on Scala pretty recently. It's a statically typed object-functional language that embraces everything beautiful about the functional paradigm and cleans up what sucked about Java. The lecture also introduced a whole bunch of promising monads (e.g. Futures). Not gonna lie, it was pretty beautiful.

Anyways, no Haskell drones lurking here?

Come on, people, is everyone really infected by C++?
 
Level 8
Joined
Aug 13, 2009
Messages
466
The first programming course at my university's CS program used SML/NJ, a functional language which should be fairly similar to Haskell :p
That said, it was focused a decent bit on data structures and code complexity,
and most people would probably have coded previously anyway.
 

fladdermasken
and most people would probably have coded previously anyway.
My uni got us started on Haskell straight away. It was actually a pedagogical move aimed at people who had been introduced (probably informally) to programming beforehand: even if you had already done something like Java, C# or C++, you would still feel pretty reset in Haskell.
 
Yes~

Code:
    -- My first function
    sumList :: Num a => [a] -> a
    sumList [] = 0
    sumList (x:xs) = x + sumList xs

    -- Recursive Fibonacci function
    fibonacci :: Int -> Int
    fibonacci 0 = 1
    fibonacci 1 = 1
    fibonacci n = fibonacci (n - 1) + fibonacci (n - 2)

    -- Trivial Even/Odd functions
    isOdd :: Int -> Bool
    isOdd n = mod n 2 == 1
    isEven :: Int -> Bool
    isEven n = not (isOdd n)

    -- Greatest pair function
    greatestPair :: (Num a, Ord a) => [(a,a)] -> (a,a)
    greatestPair [a] = a
    greatestPair (x:xs) = let y = greatestPair xs
                          in if fst x + snd x > fst y + snd y then x else y

    -- Flad's countList function (only here for the average function)
    countList :: [a] -> Int
    countList [] = 0
    countList (x:xs) = 1 + countList xs

    {-
        Average function
        Only to be used on numerical data
    -}
    average :: Fractional a => [a] -> a
    average [] = 0
    average xs = sumList xs / fromIntegral (countList xs)

I'm still getting started. Hopefully, I'll get my head around the more complex syntactical features soon.
 
Level 8
Recursive fibonacci sucks though.

In a potentially amusing anecdote: long ago I looked at recursive fib and, due to how I thought, I just did not get how it worked. Personally, I'd constructed a version with tail recursion (more or less a loop) for our assignment of the week, because doing it that way just seemed more logical to me - my mind thinks imperatively, I guess.

Some time after, I realized that it was basically just the Fibonacci series' recursive definition. I wanted to kick myself for not seeing it.
 

fladdermasken
I know.
Good thing no one really uses these in real applications anyway /o/
http://www.haskell.org/haskellwiki/Haskell_in_industry
http://www.haskell.org/haskellwiki/Applications

The functional paradigm is actually on the rise. One strong pull right now is concurrency. Most OOP languages struggle to use multiple processors at once because of interdependencies between pieces of mutable data. Functions in pure functional languages don't alter outside state, so many functions can safely execute at the same time.
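A minimal sketch of that idea, using `par` and `pseq` from GHC's base library (module GHC.Conc); the function names here are my own, and actual speedup requires compiling with threading enabled:

```haskell
import GHC.Conc (par, pseq)

-- A pure computation: no outside state is read or written.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]

-- Because both computations are pure, evaluating them in parallel cannot
-- change the result. 'par' sparks 'a' for possible parallel evaluation
-- while 'pseq' forces 'b' first.
parallelPair :: Int -> Int -> Int
parallelPair n m =
    let a = sumTo n
        b = sumTo m
    in a `par` (b `pseq` (a + b))
```

Compiled with `-threaded` and run with `+RTS -N`, the two sums can land on different cores; without that, the code still gives the same answer, just sequentially.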
 

Ralle

Owner
Level 79
Joined
Oct 6, 2004
Messages
10,183
Interesting discussion. I have only written a few lines of Haskell in my life, but I have a friend who swears by it. It is his favorite language. So far he has written a web server, a prototype text editor, various compilers and other crazy stuff in it. I like the type systems and pattern matching stuff of these functional languages, but I am not really going to use them for anything serious.
I have used SML in three courses in uni. I like the language, but honestly, they should switch to teaching people Haskell, it's much more popular and actually used by people.

I took the liberty of wrapping your Haskell in [code=haskell][/code] tags.
 
Oh, I also got some code to share:
[sorry for reviving, but I LOVE Haskell]

Want to see the beauty of Haskell?

Here's a full library for Huffman trees. I translated it from Spanish, so it may still have some leftovers. It shows how short code can be with just higher-order functions and laziness:
Code:
module Huffman
( 
   Huffman,
   createHuffman,
   generateAsociations,
   rebuildHuffman,
   encode,
   decode
)
where
   
   import LeftistHeap
   import Data.List
   import Utilities
   
   data Huffman a = Leaf Integer a | Stem Integer (Huffman a) (Huffman a)
      deriving (Show)
      
   instance Eq (Huffman a) where
      a == b = getFrequency a == getFrequency b
      
   instance Ord (Huffman a) where
      compare a b = compare (getFrequency a) (getFrequency b)
   
   {-
    - | this function 'getFrequency' takes a 'Huffman' tree and returns
    -   the frequency of the tree's top
    -}
   getFrequency :: Huffman a -> Integer
   getFrequency (Leaf i _) = i
   getFrequency (Stem i _ _) = i
   
   {-
    - | the function 'createStem' creates an Stem from two 'Huffman' trees
    -}
   createStem :: Huffman a -> Huffman a -> Huffman a
   createStem a b = Stem ((getFrequency a) + (getFrequency b)) a b
   
   {-
    - | the function 'createHuffman' takes a list of tuples with elements and
    - their frequencies and returns a 'Huffman' tree
    -}
   createHuffman :: [(a, Integer)] -> Huffman a
   createHuffman l = (fst.extractMinimun) $ until simple gather (heap l)
      where
         heap x = foldl (\z (e,i) -> insert z (Leaf i e)) (crearLeftistHeap) x
         gather x = insert newHeap (createStem elem1 elem2)
            where
               (elem1,subHeap)  = extractMinimun x
               (elem2,newHeap)   = extractMinimun subHeap
   
   {-
    - | the function 'generateAsociations' takes a 'Huffman' tree and returns
    - a list of tuples with elements and codification.
    -}
   generateAsociations :: Huffman a -> [(a,[Bool])]
   generateAsociations (Leaf _ b) = [(b,[True])]
   generateAsociations a = genAsocAux a []
      where
         genAsocAux (Leaf _ c) b = [(c,(b ++ [True]))]
         genAsocAux (Stem _ l r) b =   (genAsocAux l (b ++ [False])) ++ 
                                       (genAsocAux r (b ++ [True]))
   
   {-
    - | the function 'rebuildHuffman' is the inverse operation to
    - 'generateAsociations', it takes a list of tuples with elements and 
    - their codification, and generates the related 'Huffman' tree.
    -}
   rebuildHuffman :: [(a,[Bool])] -> Huffman a
   rebuildHuffman [x] = Leaf 0 (fst x)
   rebuildHuffman l = 
      let g = ((filter (not.head.snd) l),(filter (head.snd) l)) in 
      Stem 0 (rebuildHuffman ((h.fst) g)) (rebuildHuffman ((h.snd) g))
      where
         h = map (\(x,y) -> (x,tail y))
   
   {-
    - | the function 'encode' takes a list of symbols and generates
    - an encoded version.
    -}
   encode :: Eq a => [a] -> (Huffman a, [Bool])
   encode l = (huff, cod)
      where
         -- build the tree from the frequency of each symbol in the input
         huff = createHuffman (map (\x -> (x, genericLength (filter (== x) l))) (nub l))
         asoc = generateAsociations huff
         cod = foldl (\z y -> z ++ (snd . unjust) (find ((== y) . fst) asoc)) [] l
   
   {-
    - | the function 'decode' takes a 'Huffman' tree, a list of encoded symbols
    - and generates a list of decoded symbols.
    - Inverse function to 'encode'.
    -}
   decode :: Huffman a -> [Bool] -> [a]
   decode huff cod = parseCod cod huff []
      where
         parseCod [] (Leaf _ a) acc = (acc ++ [a])
         parseCod [] _ acc = acc
         parseCod (_:c) (Leaf _ a) acc = parseCod c huff (acc ++ [a])
         parseCod (True:c) (Stem _ _ r2) acc = parseCod c r2 acc
         parseCod (False:c) (Stem _ r1 _) acc = parseCod c r1 acc

Those are 119 lines. Without comments and blank lines, it's about 45 lines. In C++, it would take me around 200 lines of just code. The library is not optimized to its fullest, but I've been able to compress big files in less than 5 seconds.
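For anyone who wants to play with the idea without the LeftistHeap and Utilities modules, here is a self-contained sketch of the same tree-building step using only Data.List; a sorted list stands in for the heap, and all the names are mine, not from the library above:

```haskell
import Data.List (group, insertBy, sort, sortOn)
import Data.Ord (comparing)

data Tree = Leaf Int Char | Node Int Tree Tree

weight :: Tree -> Int
weight (Leaf w _)   = w
weight (Node w _ _) = w

-- Repeatedly merge the two lightest trees; the sorted list plays the
-- role of the leftist heap (assumes a non-empty input).
build :: [Tree] -> Tree
build [t] = t
build (a:b:rest) =
    build (insertBy (comparing weight) (Node (weight a + weight b) a b) rest)

-- Walk the tree, assigning '0' for left branches and '1' for right ones.
codes :: Tree -> [(Char, String)]
codes (Leaf _ c)   = [(c, "")]
codes (Node _ l r) =
    [(c, '0' : p) | (c, p) <- codes l] ++ [(c, '1' : p) | (c, p) <- codes r]

huffmanCodes :: String -> [(Char, String)]
huffmanCodes s =
    codes (build (sortOn weight [Leaf (length g) (head g) | g <- group (sort s)]))
```

For "abracadabra", the most frequent symbol 'a' ends up with a one-bit code, exactly as a Huffman code should.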

Here's a list of common mathematical functions:
Code:
import Data.List (foldl')
-- slice takes a list l and a number n, and divides l into chunks of size n.
slice l n = map (take n) $ takeWhile (/=[]) $ iterate (drop n) l

gcd a b = fst $ until (\(_,y) -> y == 0) (\(x,y) -> (y,mod x y)) (a,b)

squareRoot n = fst $ until (\(x,y) -> x == y)
    (\(x,y) -> let bound = div (x+y+1) 2 in
        if bound*bound > n then (x,bound-1) else (bound,y)) (0,n)
--replicate n e = take n $ repeat e
exponential e n = product $ replicate n e

logb b n = length $ takeWhile (>=b) $ iterate (flip div b) n

factorial n = product [2..n]

sumDigits n = foldl' (\z y -> z + mod y 10) 0 $ takeWhile (>0) $ 
    iterate (flip div 10) n
    
-- despite the name, this computes the divisors of n, not its multiples
multiples n = filter (\x -> mod n x == 0) [1..n]
        
primeNumbers = filter (\x -> all (\y -> mod x y /= 0) [2..(x-1)]) $ 
    2 : map ((+1).(*2)) [1..]
    
getElem n l = last $ take n l
{-
    'trianguloPascal' generates an infinite list of the rows of Pascal's
    triangle, for powers 0 to infinity. How it works:
        [1] gets a 0 added on the left and on the right, giving the two
        lists 01 and 10; zipping them with (+) gives [0 + 1, 1 + 0], i.e.
        11, the next row of Pascal's triangle.
-}
trianguloPascal = iterate (\x -> zipWith (+) (0:x) (x ++ [0])) [1]

-- simple version:
-- fibs = iterate (\(x,y) -> (y,x+y)) (0,1)
fibs = 1 : scanl (+) 1 fibs

fibonacci 0 = 0
fibonacci n = last $ take n fibs

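A few quick sanity checks on the definitions above (restated here so the snippet is self-contained; `gcd` is renamed `gcd'` to avoid the clash with the Prelude, and explicit signatures are added):

```haskell
-- Same definitions as in the post, with explicit signatures.
slice :: Eq a => [a] -> Int -> [[a]]
slice l n = map (take n) $ takeWhile (/= []) $ iterate (drop n) l

-- Euclid's algorithm driven by 'until'.
gcd' :: Integral a => a -> a -> a
gcd' a b = fst $ until (\(_, y) -> y == 0) (\(x, y) -> (y, mod x y)) (a, b)

-- Integer square root by binary search on the interval (0, n).
squareRoot :: Integer -> Integer
squareRoot n = fst $ until (\(x, y) -> x == y)
    (\(x, y) -> let bound = div (x + y + 1) 2 in
        if bound * bound > n then (x, bound - 1) else (bound, y)) (0, n)
```

For example, slice [1..7] 3 gives [[1,2,3],[4,5,6],[7]], gcd' 48 18 gives 6, and squareRoot 10 gives 3.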
I want to show another beauty of Haskell: monads!
This is just the interpretation function of a program, once it has been parsed and the symbol table has been generated:
Code:
    {-
    - | the function 'interpret' takes a syntax tree and interprets it
    -   as a valid program until it finishes.
    -}
    interpret :: Program -> StateT SymbolTable IO ()
    interpret (Program i) = interpInstlist i
        where
            -- Monadic
            -- Instruction List
            interpInstlist (InstructionList l i) = do 
                interpInstlist l
                interpInst i
            interpInstlist (Empty) = return ()
            -- Instruction: Using
            interpInst (SUsing dl il) = do
                st <- get
                put $ addLevel st
                interpInstlist il
                st <- get
                put $ removeLevel st
            -- Instruction: Begin End
            interpInst (SBegin il) = interpInstlist il
            -- Instruction: Assign
            interpInst (SAssign id ex) = do
                st <- get
                case (getIdentType id st) of
                    Boolean -> put $ saveVar st id $! bool2Var $! evalBool ex st
                    Integer -> put $ saveVar st id $! int2Var $! evalInt ex st
                    Canvas -> put $ saveVar st id $! canvas2Var $! evalCanvas ex st
            -- Instruction: If Then Else
            interpInst (SIfElse eb il1 il2) = do
                st <- get
                if ((($!) evalBool) eb st) then
                    interpInstlist il1
                else
                    interpInstlist il2
            -- Instruction: If Then
            interpInst (SIfThen eb il1) = do
                st <- get
                if ((($!) evalBool) eb st) then
                    interpInstlist il1
                else
                    return ()
            -- Instruction: With
            interpInst (SWith id ea1 ea2 il) = do
                st <- get
                let start = (($!)evalInt) ea1 st
                let end = (($!)evalInt) ea2 st
                put $ saveVar (addLevel st) id $! int2Var start
                doWith id start end il
                st <- get
                put $ removeLevel st
                where
                    doWith id start end il
                        | (start <= end) = do
                            let start2 = start + 1
                            st <- get
                            put $ saveVar (resetLevel st) id $! int2Var start2
                            interpInstlist il
                            doWith id start2 end il
                        | otherwise = return ()
            -- Instruction: From
            interpInst (SFrom ea1 ea2 il) = do
                st <- get
                put $ addLevel st
                doFrom (max (((($!)evalInt) ea2 st) - ((($!)evalInt) ea1 st) + 1) 0) il
                st <- get
                put $ removeLevel st
                where
                    doFrom 0 _ = return ()
                    doFrom iter il = do
                        st <- get
                        put $ resetLevel st
                        interpInstlist il
                        doFrom (iter-1) il
            -- Instruction: While
            interpInst wh@(SWhile eb1 il) = do
                st <- get
                put $ addLevel st
                whileDo eb1 il
                st <- get
                put $ removeLevel st
                where
                    whileDo eb1 il = do
                        st <- get
                        if (evalBool eb1 st) then do
                            put $ resetLevel st
                            interpInstlist il
                            interpInst wh
                        else return ()
            -- Instruction: Print
            interpInst (SPrint el) = do
                st <- get
                lift $! printCanvas $! evalCanvas el st
            -- Instruction: Read
            interpInst (SRead id) = do
                st <- get
                case (getIdentType id st) of
                    Boolean -> do
                        lift $! putStr $
                            "Enter a boolean for identifier '" ++ id ++ 
                            "' : "
                        lift $! hFlush stdout
                        b <- lift $! getLine
                        put $ saveVar st id $! bool2Var $! string2Boolean b
                    Integer -> do
                        lift $! putStr $
                            "Enter an integer for identifier '" ++ id ++ 
                            "' : "
                        lift $! hFlush stdout
                        i <- lift $! getLine
                        put $ saveVar st id $! int2Var $! string2Int i
                    _ -> return()
                where
                    string2Boolean :: String -> Bool
                    string2Boolean b
                        | (map toLower b) == "true" = True
                        | otherwise = False
                    string2Int :: String -> Int
                    string2Int s = let f = reads s :: [(Int,String)] in auxString2Int f
                        where
                        auxString2Int :: [(Int,String)] -> Int
                        auxString2Int [] = 0
                        auxString2Int l = (fst.head) l

It's a basic interpreter for a language designed to generate ASCII art. The language handled a lot of shit, though. The State and IO monads are used simultaneously.

Recursive fibonacci sucks though.

That depends on whether the recursion is tail recursion or not. If it is, no problem: it will just work like a simple jump in assembly. If it's not, the compiler will generate code for pushing the stack and calling the function again.
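For comparison, a tail-recursive Fibonacci looks like this (a sketch; the accumulator pair carries the last two values, so the recursive call is the entire result and compiles to a jump):

```haskell
-- Tail-recursive Fibonacci: constant stack space instead of the
-- exponential call tree of the naive two-branch definition.
fibTail :: Int -> Integer
fibTail n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```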

Indeed. That looks horrible. The syntax thinks that ' means string? How prejudiced.

That's actually a weird thing in Haskell. ' can mean two things. At the end of a function name it is, by convention, a way to mark a stricter variant of the function (one that does eager evaluation instead of lazy evaluation). It can also be used for making strings that contain the symbol ". I prefer to use C's notation of \" and avoid using ' ' for that. It's one of the pointless features it has, sadly.
 
Level 29
Joined
Jul 29, 2007
Messages
5,174
I don't understand how this is any "less restrictive" than imperative languages.
As far as I can see, without ever having written Haskell or any other functional language, you are writing many ifs without writing ifs, which just makes it obscure and unreadable.
I could write the same recursive functions with clearer syntax and probably no more (and if so, not a lot more) lines in any interpreted language.

On an unrelated note, my highlighter could highlight it properly.
 
I don't understand how this is any "less restrictive" than imperative languages.
As far as I can see, without ever writing in Haskel or any other functional language, you are writing many ifs without writing ifs, which just makes it obscure and unreadable.
I could write the same recursive functions with clearer syntax and probably no more (and if so, not a lot more) lines in any interpreted language.

On an unrelated note, my highlighter could highlight it properly.

The recursion is the least of it. You normally avoid it with currying or higher-order functions. The compiler is smart enough to optimize the behavior, thus giving you good speed for simple code.

Recursion should only be used when you don't have any other means to do things; however, things like the State monad and the IO monad help you avoid saturating the stack. (Monads are something like contexts in functional languages; I'll explain a bit about them later.)

Let me show you how higher-order functions really come into play:

Code:
slice l n = map (take n) $ takeWhile (/=[]) $ iterate (drop n) l

What this function does is take a list and slice it into chunks of size n. I'll explain step by step how it does it.

First we have the function drop. What drop does is take a list and drop its first n elements. So if I have drop 3 [7,3,4,5,5], the result will be [5,5].

iterate is a very weird but useful function. What it does is build an "infinite" list of a value repeatedly applied to a function.

Let's say we have iterate f l; the result would be:
[l, f(l), f(f(l)), f(f(f(l))), ...] and so on to infinity. Remember that Haskell is LAZY; it will only do calculations when needed, unlike most imperative languages. Thus, infinite lists are valid. Notice also that I passed (drop n), a function and a parameter, as the function argument. In Haskell this partial application is valid (you wouldn't be able to do it as directly in most interpreted languages).

If, let's say, drop were defined as drop list times, I could do this:
Code:
iterate (flip drop n) l

I just mixed two functions there and it works nicely.

Now let's talk about takeWhile. It's a function that takes elements from a list until it finds an item for which the condition does not hold. So if I have takeWhile (< 8) [6,7,3,9,3,2], it will 'return' [6,7,3].

Finally there are map and take. map applies a function to each element of a list, so map (*2) [1,2,3] gives [2,4,6]; take just takes the first n elements of a list.

So let's do an example.

Let's say we have slice [6,7,8,9,2,3,6,1,2,4] 3.

First let's look at
iterate (drop n) l
which here is iterate (drop 3) [6,7,8,9,2,3,6,1,2,4].
The result will be:
[[6,7,8,9,2,3,6,1,2,4],[9,2,3,6,1,2,4],[6,1,2,4],[4],[],[]....]
if computed directly; with lazy evaluation it will only compute what is needed by the next step.

now takeWhile (/=[]) ([[6,7,8,9,2,3,6,1,2,4],[9,2,3,6,1,2,4],[6,1,2,4],[4],[],[]....])
will just take
[[6,7,8,9,2,3,6,1,2,4],[9,2,3,6,1,2,4],[6,1,2,4],[4]]

lazily speaking, it will do this
takeWhile (/=[])
result <- []
check first on iterate,
iterate computes first element:
[6,7,8,9,2,3,6,1,2,4]
is /= [] ? Yes
add [6,7,8,9,2,3,6,1,2,4]
takeWhile (/=[])
result <- [[6,7,8,9,2,3,6,1,2,4]]
check first on iterate,
iterate computes second element:
[9,2,3,6,1,2,4]
is /= [] ? Yes
add [9,2,3,6,1,2,4]
takeWhile (/=[])
result <- [[6,7,8,9,2,3,6,1,2,4], [9,2,3,6,1,2,4]]
and so on...

now we compute map (take 3) on [[6,7,8,9,2,3,6,1,2,4],[9,2,3,6,1,2,4],[6,1,2,4],[4]]
it will give:
[[6,7,8],[9,2,3],[6,1,2],[4]]

again lazily speaking (it's hard to show laziness)
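The walkthrough can be checked directly, and laziness means slice even works on an infinite list as long as only finitely many chunks are demanded (a self-contained sketch):

```haskell
slice :: Eq a => [a] -> Int -> [[a]]
slice l n = map (take n) $ takeWhile (/= []) $ iterate (drop n) l

-- The example from the walkthrough above.
example :: [[Int]]
example = slice [6,7,8,9,2,3,6,1,2,4] 3

-- Thanks to laziness, slicing an infinite list is fine if we only
-- force the first few chunks.
firstChunks :: [[Int]]
firstChunks = take 2 (slice [1 ..] 3)
```

`example` evaluates to [[6,7,8],[9,2,3],[6,1,2],[4]] and `firstChunks` to [[1,2,3],[4,5,6]].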

Now what are the advantages of this?
  • The function works for any type of list. It can be filled with strings, numbers, structs, whatever you like.
  • The function is 100% pure, so the compiler can parallelize it without you having to worry about memory management (you can also tell the compiler how to spread the work, but that's more advanced). In fact, every function in Haskell is pure.
  • The generated assembly code is not those 6 functions but 1 simplified, fully optimized function with the same behavior. Thus Haskell produces very fast code (sadly it eats memory due to laziness, but you can always make some functions run eagerly when you are sure they should be eager, by composing them with the strict application operator, e.g. (($!) . map)).
  • If you have a good notion of functional programming, it's easy to read; but again, programming CORRECTLY in functional languages means learning to program again from the very start.

cons:
  • Even though the produced code is very fast, you can still write faster code in C or C++ through hacky methods, but most such methods aren't pure or require extra knowledge of the list (like its type and size).
  • This function can use quite some memory due to laziness.

As you can see, you can do very abstract stuff at a decently low cost in Haskell.

Obviously Haskell is not the answer for everything. A good software engineer must judge the domain based on the needs and how the tools fit. If the application you are making fits Haskell better, then use it; if not, go look for another option.

I normally only use Haskell for data mining, since mathematical models fit it better. Normally Haskell is useful for transforming or parsing information. That is not the case for every program; it depends on the design and the needs.
 
but probably most methods aren't pure or require extra knowledge of the list (like type and size).

C person: What's a type? /o/

In C++, the best solution is to not implement stuff like that yourself and just use whatever is defined in the standard library, which is pretty cool considering the incredibly high chance that you're going to face a segfault at some point /o/

For std::list, there's std::list::splice.
For std::vector, you use iterator ranges and make a vector of vectors /o/
 
Level 29
Made this JS equivalent just for fun, with some unconventional (unreadable) code:
PHP:
function split(l, n) {
  return l.reduce(function(p, c) {
    var a = p[p.length - 1];
    (a.length < n) ? a.push(c) : p.push([c]);
    return p;
  }, [[]]);
}

split([1, 2, 3, 4, 5, 6, 7], 3) // [[1, 2, 3], [4, 5, 6], [7]]

All the functions you showed can be implemented equivalently in pretty much every interpreted language, like I already wrote (in fact, show me such a language without those generic collection functions).

"Infinite lists" (and lazy functions in general) are generic iterators, or in other words, closures that act as iterators.

Here's the same functionality with lazy evaluation, using Lua's for ... in ... do:
PHP:
function split(t, n)
  local i = 1 - n
  
  return function ()
    if #t > i then
      i = i + n
      return {unpack(t, i, i + n - 1)} 
    end
  end
end

for v in split({1, 2, 3, 4, 5, 6, 7}, 3) do
  print(unpack(v)) -- 1 2 3
                   -- 4 5 6
                   -- 7
end

Having everything automatically inlined isn't always good. Functions exist for a reason.
This should either be user-controllable (C++), or the compiler should do some code diagnostics and see what is worth inlining (C++, others?), it isn't a black and white world.

Any dynamically typed language supports collections containing arbitrary typed elements.

Every function is pure not because it is good, but because that's what functional programming is about.
The thing is, I can't see you writing any real world code, because, like it or not, real world code needs state.
I can see a functional language used as a layer above an imperative one, which is a popular usage of high level languages in the first place.

So after all of this, all I can say is that functional languages have really nice syntactic sugar over common features which make them seem simpler (code length/logic wise).
The lack of state can be seen as a feature, because it makes it all math-like with pure functions, but it in fact removes any real world usage of such languages by themselves.

We were talking about this in the chat a little, and just for fun I compared functional languages to GPGPU.
They have the same exact properties, but GPGPU (either through shader programs, or through dedicated APIs (CUDA, OpenCL, DirectCompute)) will probably be quite faster.
Following this logic, functional languages are just an easier solution to mock up with until you are ready to face the beast (and this probably makes me sound like a hypocrite, since I keep telling people that speed doesn't matter as long as it's enough for their purpose, but what can I say, I am biased towards shaders because of my history with them).
 
C person: What's a type? /o/

In C++, the best solution is to not implement stuff like that yourself and just use whatever is defined in the standard, which is pretty cool considering the incredibly high chance that you're going to face a segfault at one point /o/

For std::list, there's std::list::splice.
For std::vector, you use iterator ranges and make a vector of vectors /o/

It's just an example for the sake of showing how higher-order functions work; I already have all those functions in Haskell's list library. I also have a library for plain arrays, but I only use it for specific things that really need to be extremely fast.

All the functions you showed can be implemented equivalently in pretty much every interpreted language, like I already wrote (in fact, show me such a language without those generic collection functions).

"Infinite lists" (and lazy functions in general) are generic iterators, or in other words, closures that act as iterators.

The thing is not about what you can 'implement' but how fast and easily you can implement it. Anything I've done is possible in any language that is Turing complete (CS theory). The problem with your example is that you are not really mixing functions, just using functions as objects. In Haskell, functions are first-class in the fullest sense: curried by default and freely composable; in dynamic languages (most interpreted languages), and even more so in static ones (C, C++, Java), the treatment they get is more restricted, which limits how you can manipulate them.

"Infinite lists" (and lazy functions in general) are generic iterators, or in other words, closures that act as iterators.

'Are' is the wrong word; 'work like' is better. The thing is that laziness produces that behavior for that specific case.

Having everything automatically inlined isn't always good. Functions exist for a reason.
This should either be user-controllable (C++), or the compiler should do some code diagnostics and see what is worth inlining (C++, others?), it isn't a black and white world.

It's not always automatically inlined; the compiler decides, depending on how optimized the final product should be. It does its job pretty well, and that's all I care about. I've beaten even C++ code on CodeChef (C, on the other hand, is impossible to beat if someone makes the best of it).

Every function is pure not because it is good, but because that's what functional programming is about.
The thing is, I can't see you writing any real world code, because, like it or not, real world code needs state.
I can see a functional language used as a layer above an imperative one, which is a popular usage of high level languages in the first place.
Who said it's pure for being good? Yes, it's pure because it's functional; that's one of the main reasons for using functional programming.

About state: your answer is monads. They are what we use for passing state in functional languages. I have actually programmed a Pong game in Haskell with OpenGL, using just the IO monad. The interpreter I showed before keeps the symbol table in the State monad. So yeah, I've done some real-world programming. I've also made a zip maker in Haskell. So it's not just a teaching language (actually, Haskell as a first language is a terrible example, since people who learn it first can't grasp its true power, due to the complexity of the CS theory behind its features).

Explaining monads is a bit hard and they take time to grasp; they are a kind of box for values. Internally they are monoids which use lambda calculus to pass the value along an execution.
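A common first illustration of the "box" idea (my own sketch, using the standard Maybe monad): the value lives in a box, and >>= decides how it gets passed to the next function in the chain.

```haskell
-- A safe division: the result lives in a Maybe "box".
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- >>= unboxes the value, feeds it to the next function, and reboxes it.
-- If any step yields Nothing, the whole chain is Nothing: no ifs needed.
calc :: Int -> Maybe Int
calc n = safeDiv 100 n >>= safeDiv 50 >>= \r -> return (r + 1)

-- calc 5  ==  Just 3    (100 `div` 5 = 20, 50 `div` 20 = 2, 2 + 1 = 3)
-- calc 0  ==  Nothing   (the failure propagates through the chain)
```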

The thing is that you are judging the unknown. If you are interested in understanding modern functional languages, the best thing is to ignore anything you know from imperative languages.

For commercial uses, see this list:
Haskell in the Industry

As you can see, it's mainly used for banking and transaction systems, just like another well-used functional language called Erlang (made by Ericsson).
 
Level 29
Joined
Jul 29, 2007
Messages
5,174
Functions are first-class citizens in pretty much any half-decent language (as in, not C derivatives (C++, Java, C#, ...)).

I checked the wiki page about monads, and it looks to me like some more syntactic sugar over... normal functions, with normal state.

I get that with all this syntactic sugar you can probably write nicer code for specific purposes, but I still haven't seen any point that shows how imperative languages are "restrictive" in comparison to functional ones.
 
Functions are first-class citizens in pretty much any half-decent language (as in, not C derivatives (C++, Java, C#, ...)).

What I was referring to was grades. It has to do with the level of manipulation and treatment functions get, according to programming language theory.

I checked the wiki page about monads, and it looks to me like some more syntactic sugar over... normal functions, with normal state.

Incorrect. The best way to put it is that monads are a definition of how data is passed along some nested functions. Monads pass 'states' or 'contexts' (it's better to say contexts) using lambda calculus. They are based on category theory, which is the theory that can be used to define program flow in a mathematical way. There is indeed a syntactic sugar called do-notation which makes the IO context look more imperative, but inside it's just a continuous passing of lambda calculus.

In C and C++ (as in most imperative languages), program flow is described through control flow (conditionals, loops, etc.) and sequencing (; in C). All of this can be described through monads. The problem in imperative languages is that the context is not kept 'boxed', so operating on the context is not atomic and produces side effects. That's why we can't be sure a function in an imperative language is pure, while monads are pure. That's the cost of using variables.
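A minimal sketch (mine, no library involved) of what "boxed" state means: instead of mutating a variable, each step takes the old state and returns the new one, so the whole thing stays a pure function. The State monad simply automates this plumbing.

```haskell
-- A counter "state" threaded explicitly: no mutation anywhere.
type Counter = Int

tick :: Counter -> (Int, Counter)   -- returns a value and the next state
tick n = (n, n + 1)

-- Run three ticks by hand; each step receives the previous state.
threeTicks :: Counter -> ([Int], Counter)
threeTicks s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
      (c, s3) = tick s2
  in ([a, b, c], s3)

-- threeTicks 0  ==  ([0,1,2], 3)
```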

I get that with all this syntactic sugar you can probably write nicer code for specific purposes, but I still haven't seen any point that shows how imperative languages are "restrictive" in comparison to functional ones.

Both kinds of languages (well, talking about decent languages) are Turing complete, therefore what you can do in one can be done in the other. You select a language based on the domain and the needs. If you've mastered both functional and imperative languages, you'll be able to decide which one will help you produce the desired software faster and more effectively. It also depends on your design decisions during the design phase: picking the right set of languages for each component/service you decided to make will help you optimize resources.

I already mentioned the pros of functional programming; here are advantages and disadvantages:
  • purity -> easy parallelization.
  • a lot easier to debug, because functional languages (like logic languages) are declarative. This makes it easier to detect most bugs, since the compile phase will warn you about more possible bugs.
  • good for converting big amounts of data through simple functions/behaviors.
  • extremely good for: transaction systems, data mining, parsers and more.
  • not that good for graphics and interfaces.
  • can be embedded into other languages (combining languages saves lives if you design your components correctly)
  • bad for hardcore real-time systems.
  • safe from bulk copy and other security threats.
  • bad for aspect-oriented programming.

For imperative languages, it depends on the kind of language, whether it's dynamic or static. It also depends on its paradigms: structural, OO (object-oriented), AO (aspect-oriented), EO (event-oriented), IO (image-oriented, Sikuli for instance) and so on.

Say you've got a program and you need an environment that runs a big set of test cases on it. Making the environment in C would be overkill; it's better to use an interpreted language like Python, Ruby, Perl or even shell. However, if you are programming a real-time operating system for controlling the brakes of a car, and it's not in C, then you are risking the system not reacting in time, especially if you decided to use a language with garbage collection. "Oh shit, your car crashed and your wife died because the brake system decided to clean memory in that millisecond it had to adjust the wheels to avoid a rollover." Every millisecond matters in real time.

I also recall when I was working in web development: we preferred Ruby on Rails and Django over the Java frameworks. Both were slower than Java's frameworks, but we could make software a lot faster with them (also, Java is terrible for web development; it doesn't have a decent ORM library).
 
What I was referring to was grades. It has to do with the level of manipulation and treatment functions get, according to programming language theory.

I am not sure what you mean by grades.

Incorrect. The best way to put it is that monads are a definition of how data is passed along some nested functions. Monads pass 'states' or 'contexts' (it's better to say contexts) using lambda calculus. They are based on category theory, which is the theory that can be used to define program flow in a mathematical way. There is indeed a syntactic sugar called do-notation which makes the IO context look more imperative, but inside it's just a continuous passing of lambda calculus.

In C and C++ (as in most imperative languages), program flow is described through control flow (conditionals, loops, etc.) and sequencing (; in C). All of this can be described through monads. The problem in imperative languages is that the context is not kept 'boxed', so operating on the context is not atomic and produces side effects. That's why we can't be sure a function in an imperative language is pure, while monads are pure. That's the cost of using variables.

The wiki examples showed clear usage of variables.
What do you mean the context is not kept boxed?
Even C has contexts:
C:
void main()
{
    {
        {
         
        }
    }
}

In other languages it stands out more, especially when using functions as first class citizens.

(also, Java is terrible for web development; it doesn't have a decent ORM library).

Tell that to big "web development" companies, because apparently to them Java is the holy grail of web development. They probably never touched any decent language in their lives. :)
 
I am not sure what you mean by grades.



The wiki examples showed clear usage of variables.
What do you mean the context is not kept boxed?
Even C has contexts:
C:
void main()
{
    {
        {
         
        }
    }
}

In other languages it stands out more, especially when using functions as first class citizens.

Even C has contexts:
C:
int outta = 0xB8;
int main()
{
    {
        {
            outta++;
        }
    }
    return outta >> 1;
}

I didn't say it doesn't have contexts, but state is not kept within contexts. This function here is now impure. Adding a static variable inside also makes it impure.

By the way, in C functions are not first-class citizens; they are third-class. When you save or pass a function, you are only passing a pointer to it, not the function itself. In second class you can pass the function as a static object, while in first class it is a dynamic object.

Tell that to big "web development" companies, because apparently to them Java is the holy grail of web development. They probably never touched any decent language in their lives. :)

It adapts to their domain, since most of these companies must make software fast. It also has to do with the clients' support of their software. Most big clients keep support in-house and therefore must hire people who can manage the technologies involved. Hiring a guy who knows PHP or Java with knowledge of Struts, Spring or some other framework is not as costly as hiring an engineer with knowledge of Ruby on Rails or Django. A Spring/Java consultant earns between $80k and $100k a year, while a Ruby on Rails consultant earns between $110k and $145k.

Also, these companies normally have their own ORMs and components which really ease working in Java. That's the case with Google and Facebook: Google has its own framework for Java, while Facebook has its own framework for PHP and a 'translator' (compiler is an incorrect term) from PHP to C. There's also the fact that these two companies must maintain projects that have been up for years; Ruby on Rails and Django are pretty recent technologies. So is Twitter, which uses Ruby on Rails.
 
As far as I know, Google uses Go nowadays.

I am not sure what you mean by passing functions as static or dynamic objects.
As far as I can see, "static" would be C/C++ with function pointers, seeing that they are not even objects to begin with.

And again, you are just limiting yourself to local variables, but those are still normal functions with a different name.
 
As far as I know, Google uses Go nowadays.

I am not sure what you mean by passing functions as static or dynamic objects.
As far as I can see, "static" would be C/C++ with function pointers, seeing that they are not even objects to begin with.

And again, you are just limiting yourself to local variables, but those are still normal functions with a different name.

At Google they use many languages; Go is one of them, but they don't use it that much for web development. They use Java, Python, Ruby, Haskell, Go, Erlang, and I'm not sure if they use a variant of SQL (Postgres, Oracle, DB2 (or whatever IBM's is called), etc.), NoSQL or a language of their own. They also have their own conventions; for instance, employees cannot write code using exception handlers.

I am not sure what you mean by passing functions as static or dynamic objects.
As far as I can see, "static" would be C/C++ with function pointers, seeing that they are not even objects to begin with.

In third class, the function is fully compiled and static in memory. You cannot edit the function or create a new one, unless of course you keep the function on the heap and recompile it at runtime (or to bytecode or whatever) and afterwards use one of the hacky methods to run it. In second class, the function is kept as an object and can be loaded into memory if needed (or interpreted instead). In first class, the function is not only an object but is also modifiable. The case of Haskell is a lot more complex, since it won't try to generate code for a specific function but instead for the program's flow. It will also depend on whether the function is in a static library or a dynamic library.

And again, you are just limiting yourself to local variables, but those are still normal functions with a different name.

And again, you are just thinking imperatively. There's no such thing as variables in functional languages. A function with a constant value is just a constant function. The thing is that, through monads, the whole program is seen as one big pure function instead of many functions.
 
At Google they use many languages; Go is one of them, but they don't use it that much for web development. They use Java, Python, Ruby, Haskell, Go, Erlang, and I'm not sure if they use a variant of SQL (Postgres, Oracle, DB2 (or whatever IBM's is called), etc.), NoSQL or a language of their own. They also have their own conventions; for instance, employees cannot write code using exception handlers.

I remember reading that Go was specifically made for their massive web servers, with goroutines being a big player.

In third class, the function is fully compiled and static in memory. You cannot edit the function or create a new one, unless of course you keep the function on the heap and recompile it at runtime (or to bytecode or whatever) and afterwards use one of the hacky methods to run it. In second class, the function is kept as an object and can be loaded into memory if needed (or interpreted instead). In first class, the function is not only an object but is also modifiable. The case of Haskell is a lot more complex, since it won't try to generate code for a specific function but instead for the program's flow. It will also depend on whether the function is in a static library or a dynamic library.

What exactly do you mean by modifiable?
For example, you can change the source of a function object in JS, but I don't quite understand in what context that can be helpful.

And again, you are just thinking imperatively. There's no such thing as variables in functional languages. A function with a constant value is just a constant function. The thing is that, through monads, the whole program is seen as one big pure function instead of many functions.

That's literally a C program with no global variables, if you were to inline everything inside main().

Taken from an example on wiki:
Code:
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")

Call it whatever you want, but name is clearly a local variable.

It's not that I have anything against a language being pure syntactic sugar, if it's easier to express yourself for a set of problems, but I just don't understand why you don't seem to agree it is indeed a big collection of syntactic sugar features.
 
I remember reading that Go was specifically made for their massive web servers, with goroutines being a big player.
It's for controlling the data flow between front-end web servers and back-end web servers. It's also used for organizing requests, but the true "server" code is normally written in other languages. Think of it as Apache with PHP: Apache is the front-end web server (written in C or C++, not sure) and your PHP code is the back-end server code.

What exactly do you mean by modifiable?
For example, you can change the source of a function object in JS, but I don't quite understand in what context that can be helpful.

In imperative languages that has few uses; in Haskell, because of higher-order functions and function currying, it's a very desirable feature. Generating dynamic code in imperative languages is mainly used for JIT compilers (like WC3's internal JASS compiler) or hacking.
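A small sketch of mine of what currying buys you: every multi-argument function is really a chain of one-argument functions, so "building a new function" by fixing some arguments costs nothing.

```haskell
-- add is really \x -> (\y -> x + y): applying one argument
-- manufactures a new function on the spot.
add :: Int -> Int -> Int
add x y = x + y

increment :: Int -> Int
increment = add 1        -- partial application: a brand-new function

-- Higher-order use: map takes the function we just built.
-- map increment [1,2,3]  ==  [2,3,4]
-- increment 41           ==  42
```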

That's literally a C program with no global variables, if you were to inline everything inside main().

And isn't that kind of what's generated in assembly? If you think of the text area of memory as a main function (don't confuse it with your program's main) and the static area as its variables, that's pretty much it.

Taken from an example on wiki:
Code:
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")

Call it whatever you want, but name is clearly a local variable.

It's not that I have anything against a language being pure syntactic sugar, if it's easier to express yourself for a set of problems, but I just don't understand why you don't seem to agree it is indeed a big collection of syntactic sugar features.

Okay, I'll answer this in two parts. First, what you call "local variables" in Haskell are not really local variables; they are actually function arguments in a nested lambda calculus. Let's take a look at a simple example that disproves what you just said:
Code:
funcHaskellHasNoVariables = do
    x <- [1,3]
    y <- [2,4]
    return [x,y]

If x and y were variables, the evaluation of that function should give us [[1,3],[2,4]]. The fact is that it actually yields [[1,2],[1,4],[3,2],[3,4]], a.k.a. the Cartesian product. Now, how did that happen? There is indeed one, but only one, piece of syntactic sugar there: the do-notation is syntactic sugar for monadic binding. I'll rewrite the function as it actually is:
Code:
funcHaskellHasNoVariables = 
    ([1,3] >>= (\x -> ([2,4] >>= (\y -> (return [x,y])))))

You see a nested lambda calculus. Keep in mind that return in Haskell is not an instruction, it's a function: the neutral function of a monad. And >>= is the binding function of a monad. These two functions work differently depending on which monad you are actually using (the List monad here, the IO monad in your example).

In your example the do notation can be easily changed to:
Code:
main :: IO ()
main = 
  putStrLn "What is your name?" >>
  (getLine >>= (\name -> putStrLn ("Nice to meet you, " ++ name ++ "!") ))

where f >> g = f >>= (\_ -> g)

Here's a nice children's book for Haskell: Learn You a Haskell for Great Good!
Or, if you feel like looking at serious Haskell books: Real World Haskell

I suggest you give it a try.
 
Level 6
Joined
Aug 26, 2009
Messages
192
Guys, low-level-wise, what are the advantages of functional languages over imperative ones? Do they take less time, are they less memory-intensive, or?

It differs. With a good compiler/optimizer I think you can obtain the same results. Anyway, I think imperative ones are usually faster, since in pure functional languages recursion is the only possible way to create loops. And since non-tail recursions have to put the function calls on the stack, they're going to take much more memory.
 
It differs. With a good compiler/optimizer I think you can obtain the same results. Anyway, I think imperative ones are usually faster, since in pure functional languages recursion is the only possible way to create loops. And since non-tail recursions have to put the function calls on the stack, they're going to take much more memory.

In theory, any loop behavior can be translated into recursion. Normally, if the loop doesn't require an explicit/implicit stack, it can be translated into tail recursion.
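For example (a sketch of mine), a C-style accumulating loop translates directly into tail recursion: the recursive call is the last thing the function does, so a compiler can turn it back into a jump with no stack growth.

```haskell
-- for (i = 1, acc = 0; i <= n; i++) acc += i;  written as tail recursion:
sumTo :: Int -> Int
sumTo n = go 1 0
  where
    go i acc
      | i > n     = acc                 -- loop exit: return the accumulator
      | otherwise = go (i + 1) (acc + i)  -- tail call: no stack frame kept

-- sumTo 100  ==  5050
```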

The reason why some (not all) imperative languages are faster has more to do with how much they allow you to control low-level features.

Normally, according to the latest benchmarks as I recall them (I could look for sources, but it is hard to find just one that explains the hierarchy below; normally you would need about 20 of them and to elaborate on how they are all connected):

In general:

C > C++ > Java +/-= Haskell >> Python > Prolog

That's to give a general idea. Haskell is sometimes a bit faster or slower than Java, depending on what you implement.

It also depends on how you define performance. If you define it simply as median run time over multiple cases in different languages, then the above holds.

The thing is: how can I achieve acceptable performance with the least effort? Normally functional languages win that argument. If you want, I can fully elaborate on this point; I think it's quite interesting.
 
Well, I know that every recursion can be transformed into a loop. But still, it depends on what the compiler/optimizer does. If it optimizes tail recursions, then they should be as fast as loops in imperative languages.

It kind of depends. Using foldl' correctly is faster than any loop in C, unless you do loop unrolling on a processor without branch prediction.
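For reference (a sketch of mine, not a benchmark), foldl' from Data.List is the strict left fold: it forces the accumulator at every step, so no chain of thunks builds up and it runs in constant space, like a loop.

```haskell
import Data.List (foldl')

-- Strict left fold: the accumulator is evaluated at each step,
-- avoiding the thunk buildup that the lazy foldl would produce.
sumSquares :: Int -> Int
sumSquares n = foldl' (\acc x -> acc + x * x) 0 [1 .. n]

-- sumSquares 10  ==  385   (1 + 4 + 9 + ... + 100)
```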

The thing is that modern pure functional languages are normally very minimalistic, contrary to modern imperative languages. Pretty much all the constructs of a pure functional language, things like monads, functors and most basic functions, are built within the language itself. Impurities are normally OS-independent bindings built into the language, but boxed in a monadic context.

This affects how compilation is done in each case. Normally, imperative languages are reduced to a minimalistic language, optimized there, and later compiled to assembly.

Functional languages are compiled in other ways. They are actually represented as a mathematical graph using category theory, and the compiler transforms the graph into stricter representations of itself by adding properties. Once it has the desired graph, it can easily be transformed into C--, ASM or LLVM: C-- for portability, ASM for direct compilation, LLVM for further optimization and portability.
 