
ConvNetJS is a Deep Learning / Neural Networks library written entirely in Javascript. From CS231n (Fei-Fei Li, Andrej Karpathy, Justin Johnson): in supervised learning the data is (x, y), where x is the data and y is the label, and the goal is to learn a function mapping x -> y; examples include classification, regression, object detection, semantic segmentation, and image captioning. In unsupervised learning the data is just x, with no labels. I also computed an embedding for ImageNet validation images; that page was a fun hack.

In the training stage of an image captioning model, the images are fed as input to the RNN, and the RNN is asked to predict the words of the sentence, conditioned on the current word and the previous context as mediated by the hidden state.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, Fei-Fei Li: Large-Scale Video Classification with Convolutional Neural Networks. The DenseCap model is also very efficient (it processes a 720x600 image in only 240ms), and evaluation on a large-scale dataset of 94,000 images and 4,100,000 region captions shows that it outperforms baselines based on previous approaches.

Other publications include: Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, Li Fei-Fei: ImageNet Large Scale Visual Recognition Challenge; Deep Fragment Embeddings for Bidirectional Image-Sentence Mapping; and Deep Visual-Semantic Alignments for Generating Image Descriptions.

Update (September 22, 2016): the Google Brain team has released the image captioning model of Vinyals et al.
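The training step described above, an RNN predicting the next word conditioned on the current word and the image-mediated context, can be sketched with a vanilla RNN in numpy. All dimensions, weights, and names here are hypothetical toy values, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): vocabulary of 10 words,
# 512-d CNN image feature, 64-d RNN hidden state.
V, D_img, H = 10, 512, 64

# Randomly initialized parameters of a vanilla RNN language model
# conditioned on an image feature, sketching the captioning setup.
W_ih = rng.normal(0, 0.01, (H, D_img))   # image  -> hidden
W_xh = rng.normal(0, 0.01, (H, V))       # word   -> hidden (one-hot input)
W_hh = rng.normal(0, 0.01, (H, H))       # hidden -> hidden
W_hy = rng.normal(0, 0.01, (V, H))       # hidden -> vocabulary logits
b_h = np.zeros(H)
b_y = np.zeros(V)

def step(h_prev, word_id, img_feat):
    """One RNN step: a distribution over the next word, conditioned on
    the current word and the previous context in the hidden state."""
    x = np.zeros(V)
    x[word_id] = 1.0                      # one-hot current word
    h = np.tanh(W_ih @ img_feat + W_xh @ x + W_hh @ h_prev + b_h)
    logits = W_hy @ h + b_y
    p = np.exp(logits - logits.max())     # stable softmax
    return h, p / p.sum()                 # next-word probabilities

img = rng.normal(size=D_img)
h, p = step(np.zeros(H), word_id=0, img_feat=img)
```

In a real model these parameters would be learned by backpropagating the cross-entropy loss of the next-word probabilities against the ground-truth captions.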
For generating sentences about a given image region we describe a Multimodal Recurrent Neural Network architecture. A common recipe for image captioning is CNN + RNN, with the CNN pretrained on ImageNet and word vectors pretrained with word2vec. Similar to our work, Karpathy and Fei-Fei [21] run an image captioning model on regions, but they do not tackle the joint localization-and-description task; different applications such as dense captioning (Johnson, Karpathy, and Fei-Fei 2016) build on these ideas.

ScholarOctopus takes ~7000 papers from 34 ML/CV conferences (CVPR / NIPS / ICML / ICCV / ECCV / ICLR / BMVC) between 2006 and 2014 and visualizes them with t-SNE based on bigram tfidf vectors.

We introduce Sports-1M: a dataset of 1.1 million YouTube videos with 487 classes of sport. Among some fun results with character-level models, we find LSTM cells that keep track of long-range dependencies such as line lengths, quotes and brackets. We also introduce an unsupervised feature learning algorithm that is trained explicitly with k-means for simple cells and a form of agglomerative clustering for complex cells.
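The simple-cell half of that unsupervised pipeline can be sketched as k-means followed by a "triangle" encoding. This is a generic illustration in the spirit of Coates-and-Ng-style feature learning, not the paper's exact algorithm (which also builds complex cells via agglomerative clustering); all sizes and data are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means; the centroids play the role of the
    learned 'simple cell' filters in the pipeline described above."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, k) sq. dists
        assign = d.argmin(1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):                 # guard against empty clusters
                C[j] = pts.mean(0)
    return C

def triangle_encode(X, C):
    """'Triangle' activation: f_k = max(0, mu - z_k), where z_k is the
    distance to centroid k and mu is the mean distance over centroids."""
    z = np.sqrt(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    return np.maximum(0.0, z.mean(1, keepdims=True) - z)

# Toy data: 200 points in 8-d standing in for whitened image patches.
X = rng.normal(size=(200, 8))
C = kmeans(X, k=16)
F = triangle_encode(X, C)
```

The triangle encoding yields sparse, non-negative features: any centroid farther away than average is zeroed out.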
We present a model that generates natural language descriptions of images and their regions. The input is a dataset of images and 5 sentence descriptions per image that were collected with Amazon Mechanical Turk; the code base is set up for the Flickr8K, Flickr30K, and MSCOCO datasets. We learn a model that associates images and sentences through a structured, max-margin objective, which enables efficient and interpretable retrieval of images from sentence descriptions (and vice versa). I still remember when I trained my first recurrent network for Image Captioning; there's something magical about Recurrent Neural Networks (RNNs).

DenseCap efficiently identifies and captions all the things in an image with a single forward pass of a network: the Fully Convolutional Localization Network (FCLN) processes an image, proposing regions of interest and conditioning a recurrent neural network which generates the associated captions.

Related reading: Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al.; Andrej Karpathy, Armand Joulin, Li Fei-Fei: Deep Fragment Embeddings for Bidirectional Image-Sentence Mapping; Grounded Compositional Semantics for Finding and Describing Images with Sentences.

In other work, we develop an integrated set of gaits and skills for a physics-based simulation of a quadruped.
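The structured, max-margin objective can be illustrated with a small numpy sketch: a bidirectional hinge ranking loss over an image-sentence score matrix. This is a simplified stand-in for the paper's objective, with toy scores:

```python
import numpy as np

def ranking_loss(S, margin=1.0):
    """Max-margin ranking loss over a score matrix S, where S[i, j]
    scores image i against sentence j and correct pairs sit on the
    diagonal. Hinges push each correct pair above every mismatch."""
    n = S.shape[0]
    diag = S[np.arange(n), np.arange(n)]
    cost_s = np.maximum(0.0, margin + S - diag[:, None])  # rank sentences per image
    cost_i = np.maximum(0.0, margin + S - diag[None, :])  # rank images per sentence
    cost_s[np.arange(n), np.arange(n)] = 0.0              # no self-penalty
    cost_i[np.arange(n), np.arange(n)] = 0.0
    return cost_s.sum() + cost_i.sum()

# Correct pairs outscore all mismatches by more than the margin -> loss 0.
S_good = np.array([[5.0, 1.0], [0.0, 4.0]])
# A mismatched pair (image 0, sentence 1) scores too high -> positive loss.
S_bad = np.array([[5.0, 4.5], [0.0, 4.0]])
```

Minimizing such a loss pulls matching image and sentence embeddings together while pushing mismatched pairs at least a margin apart.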
The ideas in this work were good, but at the time I wasn't savvy enough to formulate them in a mathematically elaborate way. I didn't expect that it would go on to explode on the internet. I think I enjoy writing AIs for games more than I like playing games myself; over the years I wrote several, for World of Warcraft, Farmville, and Chess, among various other projects from long ago.

There are way too many Arxiv papers. In general, it should be much easier than it currently is to explore the academic literature and find related papers. arxiv-sanity-preserver is an attempt to make them searchable and sortable in a pretty interface; this hack is a small step in that direction, at least for my bubble of related research.

Several recent approaches to Image Captioning [32, 21, 49, 8, 4, 24, 11] rely on a combination of an RNN language model conditioned on image information, possibly with soft attention mechanisms [51, 5]. tsnejs is a t-SNE visualization algorithm implemented in Javascript.

Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng: Emergence of Object-Selective Features in Unsupervised Feature Learning.

I have been fascinated by image captioning for some time but still have not played with it. The Sports-1M dataset allowed us to train large Convolutional Neural Networks that learn spatio-temporal features from video rather than from single, static images.
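A paper-similarity ranking like the one behind ScholarOctopus and arxiv-sanity can be sketched in a few lines of pure Python: bigram tfidf vectors plus cosine similarity. The real tools then feed such vectors into t-SNE or a classifier; the titles below are just sample strings:

```python
import math
from collections import Counter

def bigrams(text):
    """Word bigrams of a lowercased text, the unit for the tfidf vectors."""
    words = text.lower().split()
    return [" ".join(p) for p in zip(words, words[1:])]

def tfidf_vectors(docs):
    """Map each document to a sparse {bigram: tf * idf} dict."""
    counts = [Counter(bigrams(d)) for d in docs]
    df = Counter(g for c in counts for g in c)   # document frequency
    n = len(docs)
    return [{g: tf * math.log(n / df[g]) for g, tf in c.items()} for c in counts]

def cosine(u, v):
    dot = sum(u[g] * v[g] for g in u if g in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

papers = [
    "deep visual semantic alignments for generating image descriptions",
    "dense captioning with fully convolutional localization networks",
    "large scale video classification with convolutional neural networks",
]
vecs = tfidf_vectors(papers)
# Rank all papers by similarity to the first one; it ranks itself first.
ranked = sorted(range(len(papers)), key=lambda i: -cosine(vecs[0], vecs[i]))
```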
Background: double major in Computer Science and Physics; Computer Science PhD student, Stanford University.

Selected talks and teaching:

- Convolutional Neural Networks for Visual Recognition (CS231n)
- 2017: Automated Image Captioning with ConvNets and Recurrent Nets
- ICVSS 2016 Summer School Keynote Invited Speaker
- MIT EECS Special Seminar: "Connecting Images and Natural Language"
- Princeton CS Department Colloquium: "Connecting Images and Natural Language"
- Bay Area Multimedia Forum: Large-scale Video Classification with CNNs
- CVPR 2014 Oral: Large-scale Video Classification with Convolutional Neural Networks
- ICRA 2014: Object Discovery in 3D Scenes Via Shape Analysis
- Stanford University and NVIDIA Tech Talks and Hands-on Labs
- SF ML meetup: Automated Image Captioning with ConvNets and Recurrent Nets

Side projects include automatically captioning images with sentences, teaching a computer to write like Engadget, a t-SNE visualization of CNN codes for ImageNet images, a minimal character-level Recurrent Neural Network language model, and a Generative Adversarial Nets Javascript demo. Research Lei is an Academic Papers Management and Discovery System; it helps researchers build, maintain, and explore academic literature more efficiently, in the browser.

A long time ago I was really into Rubik's Cubes: I learned to solve them in about 17 seconds and then, frustrated by the lack of learning resources, created my own. My research interests span Deep Learning, Generative Models, Reinforcement Learning, and Large-Scale Supervised Deep Learning for Videos.

A glaring limitation of Vanilla Neural Networks (and also Convolutional Networks) is that their API is too constrained: they accept a fixed-sized vector as input (e.g. an image) and produce a fixed-sized vector as output (e.g. probabilities of different classes).

Andrej Karpathy, Stephen Miller, Li Fei-Fei: Object Discovery in 3D Scenes via Shape Analysis.
In the RNN diagrams, input vectors are in red, output vectors are in blue, and green vectors hold the RNN's state (more on this soon). A takeaway from CS231n (Lecture 11) for your projects and beyond: have some dataset of interest, but it has fewer than ~1M images? Find a very large dataset with similar data, train a big ConvNet there, and transfer it to your dataset.

When trained on a large dataset of YouTube frames, the algorithm automatically discovers semantic concepts, such as faces. We study Recurrent Networks both qualitatively and quantitatively.

Wouldn't it be great if our robots could drive around our environments and autonomously discover and learn about objects?

ImageNet Large Scale Visual Recognition Challenge: everything you wanted to know about ILSVRC — data collection, results, trends, current computer vision accuracy, even a stab at computer vision vs. human vision accuracy. This work was also featured in a recent article.

I helped create the Programming Assignments for Andrew Ng's machine learning course, and I like to go through classes on Coursera and Udacity. My Master's work on curriculum learning for motor skills (Learning Controllers for Physically-simulated Figures) was heavily influenced by intuitions about human development and learning, i.e. trial and error learning and the idea of gradually building skill competencies.

Justin Johnson*, Andrej Karpathy*, Li Fei-Fei (* equal contribution): DenseCap: Fully Convolutional Localization Networks for Dense Captioning; also Visualizing and Understanding Recurrent Networks. Related applications include grounded captioning (Ma et al. 2019; Li, Jiang, and Han 2019).

Within a few dozen minutes of training, my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.
For inferring the latent alignments between segments of sentences and regions of images, we describe a model based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. Our model is fully differentiable and trained end-to-end without any pipelines.

A few examples may make this more concrete: each rectangle is a vector, and arrows represent functions (e.g. matrix multiply).

Case Study: AlexNet [Krizhevsky et al. 2012]. Full (simplified) AlexNet architecture: [227x227x3] INPUT; [55x55x96] CONV1: 96 11x11 filters at stride 4, pad 0.

I gave image captioning a try today using the open source project neuraltalk2 written by Andrej Karpathy. Google was inviting people to become Glass explorers through Twitter (#ifihadclass), and I set out to document the winners of the mysterious process for fun. In my Master's work, I was working with a heavily underactuated (single joint) footed acrobot.
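The CONV1 numbers quoted above follow from the standard convolution output-size formula, which is easy to check in code:

```python
def conv_output_size(w, f, stride, pad):
    """Spatial output size of a convolution: (W - F + 2P) / S + 1."""
    out, rem = divmod(w - f + 2 * pad, stride)
    assert rem == 0, "filter does not tile the input evenly"
    return out + 1

# AlexNet CONV1: 227x227 input, 96 11x11 filters, stride 4, pad 0
# -> (227 - 11) / 4 + 1 = 55, i.e. a 55x55x96 output volume.
side = conv_output_size(227, 11, 4, 0)
```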
Selected talks: Multi-Task Learning in the Wilderness @ ICML 2019; Building the Software 2.0 Stack @ Spark-AI 2018; 2016 Bay Area Deep Learning School: Convolutional Neural Networks. Winter 2015/2016: I was the primary instructor for CS231n.

Papers: Tianlin (Tim) Shi, Andrej Karpathy, Linxi (Jim) Fan, Jonathan Hernandez, Percy Liang; Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma, and Yaroslav Bulatov; DenseCap: Fully Convolutional Localization Networks for Dense Captioning; Learning a Recurrent Visual Representation for Image Caption Generation, Chen and Zitnick.

Caption generation is a real-life application of Natural Language Processing in which we generate text from an image. We present a model that generates natural language descriptions of images and their regions. An earlier approach uses a Recursive Neural Network to compute representations for sentences and a Convolutional Neural Network for images.

Deep Visual-Semantic Alignments for Generating Image Descriptions, Andrej Karpathy, Li Fei-Fei [Paper]. Goals and motivation: design a model that reasons about the content of images and their representation in the domain of natural language, and make the model free of assumptions about hard-coded templates, rules, or categories; previous work in captioning uses fixed vocabularies or non-generative methods.
We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. DenseCap: Fully Convolutional Localization Networks for Dense Captioning, Justin Johnson*, Andrej Karpathy*, Li Fei-Fei (* equal contribution), presented at CVPR 2016 (oral), addresses this problem, where a computer detects objects in images and describes them in natural language. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. Our model is fully differentiable and trained end-to-end without any pipelines. NeuralTalk2 generates an image caption for an image or live video.

Last year I decided to also finish Genetics and Evolution.
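At test time, a caption decoder like the ones described above emits one word at a time, feeding each prediction back in as the next input. A minimal greedy-decoding sketch, with a hypothetical toy vocabulary and random untrained weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy vocabulary; index 0 is the special <END> token.
vocab = ["<END>", "a", "cat", "sitting", "on", "mat"]
V, H = len(vocab), 32

# Random parameters standing in for a trained caption decoder.
W_xh = rng.normal(0, 0.5, (H, V))
W_hh = rng.normal(0, 0.5, (H, H))
W_hy = rng.normal(0, 0.5, (V, H))

def greedy_decode(h0, start_id=1, max_len=10):
    """Generate a caption word by word, feeding each predicted word
    back in as input, until <END> or max_len is reached."""
    h, w, out = h0, start_id, []
    for _ in range(max_len):
        x = np.zeros(V)
        x[w] = 1.0                       # one-hot previous word
        h = np.tanh(W_xh @ x + W_hh @ h)
        w = int((W_hy @ h).argmax())     # greedy: most likely next word
        if w == 0:                       # hit <END>
            break
        out.append(vocab[w])
    return out

caption = greedy_decode(np.zeros(H))
```

In practice the initial hidden state would be conditioned on CNN image features, and beam search is often used instead of the plain argmax shown here.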
The acrobot used a devised curriculum to learn a large variety of parameterized motor skill policies, skill connectivities, and also hierarchical skills that depended on previously acquired skills. The project was heavily influenced by intuitions about human development and learning, i.e. trial and error learning and the idea of gradually building skill competencies.

Show and Tell: A Neural Image Caption Generator, Vinyals et al. NeuralTalk2 is image captioning code written in Torch that runs on the GPU. The whole DenseCap system is trained end-to-end on the Visual Genome dataset (~4M captions on ~100k images), designed and implemented by Justin Johnson, Andrej Karpathy, and Li Fei-Fei at the Stanford Computer Vision Lab.

In other work we introduce a simple object discovery method that takes as input a scene mesh and outputs a ranked set of segments of the mesh that are likely to constitute objects.

Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. The model then uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on the Flickr8K, Flickr30K and MSCOCO datasets, and we show that the generated descriptions significantly outperform retrieval baselines, both on full images and on a new dataset of region-level annotations. Our analysis sheds light on the source of improvements and identifies areas for further potential gains. We also study the performance of Recurrent Networks in language modeling tasks compared to finite-horizon models.

Course logistics from CS231n: Software Setup, Python / Numpy Tutorial (with Jupyter and Colab), Google Cloud Tutorial, Module 1: Neural Networks.
