msapaydin
karma: 92
Mehmet Serkan Apaydın
Assistant professor, Computer Science, Acıbadem University Istanbul
deep learning, natural language processing, bioinformatics, computer vision applications in health
http://msapaydin.wordpress.com
- This is a nice Markov text generator: https://cdn.cs50.net/ai/2023/x/lectures/6/src6/markov/
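The link above points to CS50's own source; as a rough stand-alone sketch of the same idea (a word-level Markov chain over a tiny toy corpus of my choosing, not the CS50 code), one could write:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word tuple to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain to produce up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed successor for this context
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=42))
```

With `order=1` the output only depends on the previous word, so it reads as plausible-but-scrambled text; raising `order` makes it hew closer to the source.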
- There is a famous Turkish Sumerologist who was influential in the early Turkish Republic; as a result, many schools in Turkey were named after the Sumerians. Her name is Muazzez İlmiye Çığ. https://en.wikipedia.org/wiki/Muazzez_%C4%B0lmiye_%C3%87%C4%...
- I was awarded an FPGA chip by Xilinx, which I blogged about: http://msapaydin.wordpress.com Xilinx has recently been making an effort to promote the use of its FPGAs in machine learning.
- An extremely well written article for someone who would like to dig deeper into forecasting; it actually gives valuable links about the Prophet tool, which I have used and recommended. I agree with the author that it is strange that the job posting mentions only Prophet, which is a rather basic, single library that I usually recommend to undergrads for their senior projects.
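To give a sense of how basic single-series forecasting can get, here is simple exponential smoothing in plain Python — a toy baseline with made-up numbers, not Prophet itself (Prophet fits a full trend-plus-seasonality model, which is exactly why it is odd for a posting to stop there):

```python
def ses_forecast(series, alpha=0.5, horizon=3):
    """Simple exponential smoothing: the level tracks the series, and the
    flat forecast repeats the final level `horizon` steps ahead."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

sales = [10.0, 12.0, 11.0, 13.0, 12.0]  # hypothetical daily values
print(ses_forecast(sales, alpha=0.5, horizon=3))  # -> [12.0, 12.0, 12.0]
```

A higher `alpha` weights recent observations more heavily; a lower one smooths more aggressively.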
- I think that not being able to reproduce the results claimed in a paper is not specific to ML research. While working as a post-doc at a top university research lab, I spent years trying to understand how software that was supposed to correspond to a well-cited paper did not even come close to reproducing that paper's results, and how the primary author nevertheless went on to become a professor at a top university in the US. In short, scientific fraud is also quite common in academic papers.
- I taught an OS class for the first time in my career with this book (I had taught many other classes before). It was a breeze compared to the books used by faculty who taught earlier versions of the class. Students enjoyed the book a great deal as well, and the fact that it is freely available was a great plus.
- I use a pomodoro app (Productivity Challenge Timer) and as a result track my time. My smartphone also tracks walking and running, and I have recently started tracking sleep as well. I would guess 80% of the tracking happens automatically. I also use Habitica as a complementary tracking tool.
- I guess my point is that the parent comments referred to Maradona rather than to Diego. The documentary made me realize that it was actually not him but all those Napoli fans clinging to him that caused his demise. If he had been able to leave Napoli for a calmer retirement, perhaps he would not have become what he eventually became. According to the documentary, it was the greed of Ferlaino (the Napoli FC president) that kept him in Napoli and led to his infamy.
- I guess some of these things are inevitable; the trick is not to try to eliminate them completely, as that is impossible. The optional ones, however, could be eliminated if one has some practice doing the trade-off analysis, perhaps along with deliberate poverty practice; with some luck, one could become wealthy enough to delegate even the hardest-to-eliminate tasks to an assistant.
- Some projects started by researchers from Stanford University seem to address these issues. Some keywords I have come across are MLflow, Sisu, and Databricks; the last is the company behind Spark. Sisu is a company whose product I have not tried, and I had trouble working with MLflow, but the ideas are worth a look.
- Another attempt at GPT-3 filtering the intro section of the paper: This book is about how we can guess what will happen in the future. The first chapter is about how we can guess the next number in a sequence. The second chapter is about how we can guess the next word in a sentence. The third chapter is about how we can guess the next word in a book. The fourth chapter is about how we can guess the next word in all the books ever written. The fifth chapter is about how we can guess the next word in all the books ever written
- This is also GPT-3 output based on the abstract, as if explaining to an eighth grader: The theory of everything (TOE) is a mathematical formula that can predict everything that will happen in the future. It is a theory of everything, so it can predict what will happen to the universe, and to us. We assume that the TOE is simple enough for us to understand, but we don't know for sure.
- Here is what comes out: "The history of our universe is a sequence of random events. Each event is the result of a very small probability, and the events are independent of each other. The probability that any particular event will happen is very small. So the probability that any particular event will happen twice is even smaller. So the probability that any particular event will happen twice in a row is even smaller. So the probability that any particular event will happen three times in a row is even smaller. And so on."
- >essentially a classification problem, with extra steps.
I think it is a question-answering problem with the extra ability to say "I don't know", rather than a classification problem. Current transformer models for question answering, such as the Hugging Face pipeline implementations, do not handle the "I don't know" case, and this is an attempt to see whether GPT-3 can easily be trained to have this extra functionality.
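One crude way to bolt that ability onto an ordinary QA model is to threshold its answer confidence and abstain below the cutoff. A minimal sketch, with entirely hypothetical candidate-answer scores standing in for model output:

```python
def answer_or_abstain(scores, threshold=0.5):
    """Return the highest-scoring candidate answer, or "I don't know"
    when even the best score falls below the confidence threshold."""
    best_answer, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_answer if best_score >= threshold else "I don't know"

# Hypothetical score dictionaries a QA model might produce.
confident = {"Paris": 0.92, "Lyon": 0.05}
unsure = {"Paris": 0.31, "Lyon": 0.28}
print(answer_or_abstain(confident))  # -> Paris
print(answer_or_abstain(unsure))     # -> I don't know
```

The hard part, of course, is whether those scores are calibrated enough for a fixed threshold to be meaningful — which is precisely what such an experiment would probe.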
>results are not universally consistent.
I think the relative probabilities are what matter, and those may be more consistent than the absolute probabilities.
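To illustrate the point with made-up numbers: two runs can disagree on absolute probabilities yet agree completely once the scores are normalized against each other.

```python
def relative(scores):
    """Normalize raw scores so that only their ratios matter."""
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

# Hypothetical raw answer probabilities from two runs: the absolute
# values differ by a factor of two, but the relative values agree.
run_a = {"yes": 3.0, "no": 1.0}
run_b = {"yes": 6.0, "no": 2.0}
print(relative(run_a))  # -> {'yes': 0.75, 'no': 0.25}
print(relative(run_b))  # -> {'yes': 0.75, 'no': 0.25}
```

So a model whose absolute confidences drift between runs can still rank answers consistently, which is often all one needs.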