## Got Behavior? Get Poses ... <img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1572296495650-Y4ZTJ2XP2Z9XF1AD74VW/ke17ZwdGBToddI8pDm48kMulEJPOrz9Y8HeI7oJuXxR7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UZiU3J6AN9rgO1lHw9nGbkYQrCLTag1XBHRgOrY8YAdXW07ycm2Trb21kYhaLJjddA/DLC_logo_blk-01.png?format=1000w" width="350" title="DLC-live" alt="DLC LIVE!" align="right" vspace="50">
This document is an outline of resources for an informal "spring/summer course" for those wanting to learn to use DeepLabCut while responsibly isolating due to COVID-19. We expect it to take *roughly* 1-2 weeks to get through alone, or you can join the course and it will be spread out over 3 weeks.
**UPDATE:** We will also be organizing a "structured" version with a kick-off webinar (**JUNE 5TH!**), then a weekly check-in (for 3 weeks) to have Q & A style discussions with the core development team! If you want to be a part of this course, please sign up by **JUNE 3RD**!! https://forms.gle/KRtdKKYB57ZkaBwH7
:purple_heart: We would also be very excited if you contributed to the newly launched DeepLabCut Model Zoo while you learn! Namely, you can learn to use DeepLabCut on data that can be used to build better community tools!! Please contact us in this form if you are interested: https://forms.gle/KRtdKKYB57ZkaBwH7 :purple_heart:
www.deeplabcut.org
You can also chat with one another on Gitter or Twitter:
## Quick Start: (before the kick-off)
You need: Anaconda for Python 3 and DeepLabCut installed (CPU version).
- you should have a [CPU version of DeepLabCut installed on your laptop](https://github.com/AlexEMG/DeepLabCut/blob/master/conda-environments/README.md). We will assume you don't all have GPUs at home, so we will utilize cloud-computing resources for those steps.
- **WATCH:** There are a lot of docs... where to begin: [Video Tutorial!](https://www.youtube.com/watch?v=A9qZidI7tL8)
### **Module 1: getting started on data**
**What you need:** any videos where you can see the animals/objects, etc.
You can use our demo videos, grab some from the internet, or use whatever older data you have. Any camera, color/monochrome, etc. will work. Find diverse videos, and label what you want to track well :)
- IF YOU ARE PART OF THE COURSE: you will be contributing to the DLC Model Zoo :smile:
:purple_heart:**NOTE:** if you want to contribute back to community science, please get in touch with us, as we have a LOT of data we want to label to be able to share back with everyone; so, if you want to help, sign up here (labeling can be on data we provide or possibly yours): https://forms.gle/KRtdKKYB57ZkaBwH7 :purple_heart:
- **READ ME PLEASE:** [DeepLabCut, the user guide](https://rdcu.be/bHpHN)
- **WATCH:** Video tutorial 1: [using the Project Manager GUI](https://www.youtube.com/watch?v=KcXogR-p5Ak)
- Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
- **WATCH:** Video tutorial 2: [using the Project Manager GUI for multi-animal pose estimation](https://www.youtube.com/watch?v=Kp-stcTm77g)
- Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
- **WATCH:** Video tutorial 3: [using ipython/pythonw (more functions!)](https://www.youtube.com/watch?v=7xwOhUcIGio)
- Please go from project creation (use >1 video!) to labeling your data, and then check the labels!
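The project-creation-to-labeling workflow above can be sketched in code. This is a minimal sketch, not the course's official script: the folder name, project name, and experimenter name are placeholders, and the DeepLabCut calls are shown commented so you can paste them into ipython once DeepLabCut is installed. The small helper simply gathers your video files first.

```python
from pathlib import Path

# Placeholder location -- point this at your own video folder.
VIDEO_DIR = Path("videos")
SUPPORTED = {".avi", ".mp4", ".mov", ".mkv"}

def collect_videos(folder):
    """Gather supported video files to pass into project creation (use >1 video!)."""
    videos = sorted(str(p) for p in Path(folder).iterdir()
                    if p.suffix.lower() in SUPPORTED)
    if len(videos) < 2:
        raise ValueError("Label frames from more than one video for better generalization.")
    return videos

# The DeepLabCut calls themselves (project name and experimenter are placeholders):
# import deeplabcut
# config_path = deeplabcut.create_new_project("demo-project", "your-name",
#                                             collect_videos(VIDEO_DIR),
#                                             copy_videos=True)
# deeplabcut.extract_frames(config_path)   # pick frames to annotate
# deeplabcut.label_frames(config_path)     # opens the labeling GUI
# deeplabcut.check_labels(config_path)     # visually verify your annotations
```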
- **Slides:** [Overview of creating training and test data, and training networks](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/master/part2-network.pdf)
- **READ ME PLEASE:** [What are convolutional neural networks?](https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53)
- **READ ME PLEASE:** Here is a new paper from us describing challenges in robust pose estimation, why PRE-TRAINING really matters - which was our major scientific contribution to low-data-input pose estimation - and it describes new networks that are available to you. [Pretraining boosts out-of-domain robustness for pose estimation](https://paperswithcode.com/paper/pretraining-boosts-out-of-domain-robustness)
- **MORE DETAILS:** ImageNet: check out the original paper and dataset: http://www.image-net.org/ (link to [ppt from Dr. Fei-Fei Li](http://www.image-net.org/papers/ImageNet_2010.ppt))
Before you create a training/test set, please read/watch:
- **More information:** [Which types of neural networks are available, and what should I use?](https://github.com/AlexEMG/DeepLabCut/wiki/What-neural-network-should-I-use%3F)
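Once you have done the reading above, this module's hands-on step is creating the training/test split and training. Below is a minimal sketch: the `split_counts` helper illustrates how a `TrainingFraction` of 0.95 (the config.yaml default) divides your labeled frames (DeepLabCut's exact rounding may differ slightly), and the actual DeepLabCut calls are shown commented, assuming a `config_path` from project creation.

```python
def split_counts(n_labeled, training_fraction=0.95):
    """Approximate train/test image counts for a given TrainingFraction
    (as set in config.yaml); DeepLabCut's exact rounding may differ."""
    n_train = int(round(n_labeled * training_fraction))
    return n_train, n_labeled - n_train

print(split_counts(200))  # e.g. 200 labeled frames with the 0.95 default

# With DeepLabCut installed, the actual calls for this module are:
# import deeplabcut
# deeplabcut.create_training_dataset(config_path)
# deeplabcut.train_network(config_path)   # run this on a GPU / cloud resource
```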
- **Slides:** [Evaluate your network](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials/blob/master/part3-analysis.pdf)
- **WATCH:** [Evaluate the network in ipython](https://www.youtube.com/watch?v=bgfnz1wtlpo)
- why evaluation matters; how to benchmark; analyzing a video and using scoremaps, confidence readouts, etc.
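To make the evaluation idea concrete, here is a small stdlib-only sketch of the kind of benchmark number involved: root-mean-square keypoint error, ignoring low-confidence predictions in the spirit of DeepLabCut's p-cutoff. The keypoint coordinates and confidences below are toy values, not real network output.

```python
import math

def rmse_filtered(gt, pred, conf, cutoff=0.6):
    """Root-mean-square keypoint error over (x, y) pairs, counting only
    predictions whose confidence clears the cutoff (p-cutoff idea)."""
    errs = [(gx - px) ** 2 + (gy - py) ** 2
            for (gx, gy), (px, py), c in zip(gt, pred, conf) if c >= cutoff]
    if not errs:
        raise ValueError("no predictions above the confidence cutoff")
    return math.sqrt(sum(errs) / len(errs))

# Toy numbers: two confident keypoints, one low-confidence outlier that is ignored.
gt   = [(10.0, 10.0), (20.0, 20.0), (30.0, 30.0)]
pred = [(13.0, 14.0), (20.0, 20.0), (90.0, 90.0)]
conf = [0.9, 0.95, 0.1]
print(rmse_filtered(gt, pred, conf))
```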
### **Module 4: Scaling your analysis to many new videos**
- [Analyzing videos in batches, over many folders, setting up automated data processing](https://github.com/DeepLabCut/DLCutils/tree/master/SCALE_YOUR_ANALYSIS)
89
-
96
+
97
+
- How to automate your analysis in the lab: [datajoint.io](https://datajoint.io), Cron jobs: [schedule your code runs](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/)
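A common pattern for batch analysis is to sweep a project root (e.g. one subfolder per session) for videos and hand the whole list to one analysis call. A minimal sketch, assuming a hypothetical `data/sessions` layout; only the file-gathering helper runs on its own, and the DeepLabCut call is shown commented.

```python
from pathlib import Path

def videos_under(root, exts=(".avi", ".mp4")):
    """Recursively collect video paths under a project root,
    ready to hand to a batch-analysis call."""
    return sorted(str(p) for p in Path(root).rglob("*")
                  if p.suffix.lower() in exts)

# With DeepLabCut installed you could then run (hypothetical folder name):
# import deeplabcut
# deeplabcut.analyze_videos(config_path, videos_under("data/sessions"),
#                           save_as_csv=True)
```

A cron job or a DataJoint pipeline can then invoke a script like this on a schedule, so new sessions get analyzed without manual steps.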
### **Module 5: Got Poses? Now what ...**
- [Helper code and packages for use on DLC outputs](https://github.com/DeepLabCut/DLCutils)
- Course subscribers, we will go into depth on several ways to analyze your data. Please sign up! :smile:
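The helper packages above all operate on DeepLabCut's output files. As a taste of what those look like, here is a stdlib-only sketch that parses the CSV layout DeepLabCut exports (three header rows: scorer, bodyparts, coords, then x/y/likelihood columns per bodypart). The scorer name and the numbers below are mock values for illustration.

```python
import csv, io

# A tiny mock of DeepLabCut's CSV output layout: three header rows
# (scorer / bodyparts / coords), then one row per frame.
MOCK_DLC_CSV = """\
scorer,DLC_resnet50,DLC_resnet50,DLC_resnet50
bodyparts,nose,nose,nose
coords,x,y,likelihood
0,10.0,12.0,0.98
1,11.0,12.5,0.40
2,12.0,13.0,0.97
"""

def confident_xy(text, pcutoff=0.9):
    """Return (frame, x, y) rows whose likelihood clears the p-cutoff."""
    rows = list(csv.reader(io.StringIO(text)))
    out = []
    for frame, x, y, p in rows[3:]:        # skip the three header rows
        if float(p) >= pcutoff:
            out.append((int(frame), float(x), float(y)))
    return out

print(confident_xy(MOCK_DLC_CSV))  # frame 1 is dropped (likelihood 0.40)
```

In practice you would load the real .h5/.csv output with pandas, but the column structure is the same.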