
EVENT: Sensory Friendly Screening of Diary of a Wimpy Kid: Rodrick Rules

What: A sensory-friendly showing of the 20th Century Fox film Diary of a Wimpy Kid: Rodrick Rules, presented by Goodrich Quality Theaters.
When: Saturday, April 2 at 10 a.m.
Where: Hamilton 16 IMAX in Noblesville, IN.
Who Should Attend: Families with children who have special needs.

The "Lights Up, Sound Down" event gives families with children who have special needs the opportunity to enjoy their favorite movies in a comfortable, sensory-friendly setting, with the lights turned up and the volume turned down in the auditorium. The shows are offered at regular matinee prices, and concession discounts are available for groups of 15 or more. This week's sensory-friendly movie is sponsored by WestPoint Financial Group.

How the Recent Changes in Facebook Impact People with Disabilities

An Overview of How the Recent Changes in Facebook Impact People with Disabilities
M. Wade Wingler, ATP

Anytime one of the major social media tools, like Facebook, makes major changes, there is an outcry from the general user base. Although these changes are typically made in an effort to improve and expand the usefulness of the tools, most users complain about having to re-learn how to use them. Generally, these cries die away, and the "new way" of using these tools simply becomes "the way" of using them.

However, some recent changes to Facebook promise to have a significant impact on how people with certain types of disabilities use the platform. Some of these changes are positive and some are negative. Here are a few of each:

Integrated Skype video chat impacts people who are Deaf
People who are Deaf and use American Sign Language often prefer video phone or web camera communication with others who are Deaf. Now that Facebook has built-in, live Skype video chat, ASL users can easily use their native language to communicate over long distances in a convenient way.

News ticker impacts people who are blind
Although Facebook has always presented problems for people who are blind or visually impaired and rely on screen reading software, the recent change to presenting news in a ticker format is particularly problematic. In our initial testing with the Apple VoiceOver screen reader, only the news items that are presented first are read to the blind user. Anything that appears as part of the "ticker" feature is not read.

Screen layout changes impact users with low vision
Many users with low vision rely on screen magnification software. The user experience is a little like looking at your screen through a telescopic lens that provides very large print but only displays a small portion of the screen at a time. When Facebook or other websites rearrange items for efficiency or easier use, screen magnifier users are forced to learn the new locations and find them with the telescope-style view. This learning curve can be frustrating, especially when many changes are made in a short period of time.

Simpler content arrangement impacts users with learning disabilities
Sometimes social networks are intricate and complicated. When complicated pages are re-ordered, rearranged, and simplified, those changes have a positive impact on people who have cognitive challenges and some learning disabilities.

Social media in general impacts people with mobility impairments
Social media platforms, especially Facebook, inherently remove physical barriers for people with mobility difficulties. It's often much easier for a person who uses a wheelchair or walker to connect with a friend or colleague over Facebook than to travel a long distance to meet them.

Wade Wingler is the director of assistive technology at Easter Seals Crossroads in Indianapolis, Indiana. He has worked in the field for nearly 20 years and speaks nationally and internationally on the topic of how technology impacts the lives of people with disabilities. His weekly podcast, "Assistive Technology Update," is a well-known source for up-to-the-minute news on assistive technology.

Color ID Free helps Blind Smartphone Users Identify Colors

By now, you've probably caught on that we at the INDATA Project are crazy about new technology, especially smartphone and tablet apps that make assistive technology more accessible to all people. If you're interested in learning about assistive technology mobile apps, you can find more information here.

Today, we're highlighting the Color ID Free app. It's available for both the iPhone and the iPad in the Apple App Store.

It uses your phone's camera to detect the color of everyday objects and say the color aloud, which makes it particularly useful for people who are visually impaired. You can tap the color square at the top left corner to toggle between Simple Colors and Exotic Colors. The Exotic color names are fun and specific; some examples are Paris Daisy, Lavender Rose, and Moon Mist. The app also supports Simple Colors like blue, light green, and dark red. It will also tell you the hex value of the color, so you can identify exactly what color the camera sees.

This assistive technology app can be helpful for a person who is visually impaired or has difficulty distinguishing colors from each other. Imagine not being able to determine whether your shirt matches your pants; this app can help with everyday decisions like that.

The app is developed by Greengar Studios.
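Under the hood, apps like this typically compare the sampled camera pixel against a palette of named colors. Color ID Free's actual palette and matching logic aren't public, so the sketch below is purely illustrative: a nearest-neighbor lookup in RGB space over a small, made-up palette.

```python
# Hypothetical sketch of named-color identification, not Color ID Free's
# actual palette or algorithm. Maps a sampled RGB pixel to the closest
# named color by squared Euclidean distance in RGB space.

SIMPLE_COLORS = {
    "blue": (0, 0, 255),
    "light green": (144, 238, 144),
    "dark red": (139, 0, 0),
    "lavender rose": (251, 160, 227),  # an "exotic" name, values assumed
    "moon mist": (220, 221, 204),      # an "exotic" name, values assumed
}

def nearest_color(rgb):
    """Return (name, hex) of the palette color closest to the sampled rgb."""
    def dist(candidate):
        return sum((a - b) ** 2 for a, b in zip(rgb, candidate))
    name = min(SIMPLE_COLORS, key=lambda n: dist(SIMPLE_COLORS[n]))
    r, g, b = SIMPLE_COLORS[name]
    return name, f"#{r:02X}{g:02X}{b:02X}"

# A strongly blue pixel sampled from the camera:
print(nearest_color((10, 5, 240)))
```

A real app would use a much larger palette and may match in a perceptually uniform color space rather than raw RGB, but the spoken name plus hex value the article describes falls out of exactly this kind of lookup.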

Coggy Brain Teaser Puzzle

Looking for a fun, challenging puzzle for your child? Check out the Coggy brain teaser puzzle by Fat Brain Toys. Coggy is a "folding, clicking puzzle of arranging gears to match challenge cards."

The puzzle features 16 colorful gears. There are four levels of difficulty: easy, medium, hard, and extra hard. Not only is this a brain teaser, it's also a great fidget toy! Children can also flip Coggy over to reveal alternating black and white gears.

Features of the Coggy Brain Teaser Puzzle:
- Folding, clicking puzzle of arranging gears to match challenge cards
- Gears each shift up to 255° for "cognitively exciting flexibility"
- Feel the gears click as they fall perfectly into place
- Encourages visual-spatial, critical thinking, logic, fine motor, and other skills
- Includes Coggy and 40 challenge cards
- Chain of gears measures 14 inches long
- "High-quality design and materials for lasting durability"
- Ideal for children ages 6 and up

Consumer Demand for Personalisation and Tech Advances Drives Innovation in Entertainment and Media Industry

first_imgNow it’s getting personal. According to PwC’s Global Entertainment & Media Outlook 2019–2023, consumers are embracing the expanding opportunities to enjoy media experiences tailored to their needs, and companies are designing offerings and business models to revolve around those personal preferences. In a fundamental shift, they’re leveraging data and usage patterns to pitch their products not at audiences of billions, but at billions of individuals.The result is an emerging world of media that’s more personal than ever before: one in which empowered consumers control their own media consumption via an expanding range of smart devices, curate their personal selection of channels via over-the-top (OTT) services and bring more media content into their lives by embracing the smart home and connected car. It’s also an increasingly mobile world, soon to be augmented by 5G networks. As personal connections proliferate, however, consumers continue to be concerned about the safety and privacy of the data. With trust at a premium, pressure is intensifying on companies to adapt to new privacy regimes.Global industry growth continues to outpace GDP…These profound shifts are taking place against a background of ongoing global growth in entertainment & media (E&M) revenues. The Outlook — which provides revenue data and forecasts for 14 industry segments across 53 territories — projects that total global spending on E&M will rise at a compound annual growth rate (CAGR) of 4.3% over the next five years, to 2023. This growth rate will see the industry’s global revenue reach US$2.6tn in 2023, up from US$2.1tn in 2018. Over the forecast period, six segments will exhibit growth above the average, and seven below it. 
(The 14th segment, data consumption, does not generate revenue.)…but with sharp differences in growth rates at the segment levelLooking at specific E&M segments, virtual reality (VR) maintains its position as the highest-growth segment, but — after a year in which consumers’ take-up continued to lag behind expectations — its lead over the OTT video segment is greatly diminished. Podcasts and esports, which sit within larger segments, have extremely strong growth revenue forecasts at CAGRs of 28.5% and 18.3%, respectively.At the lower end of the growth spectrum, the traditional TV and home video segment now has negative growth expectations for the first time, as cord-cutting by consumers continues to rise and sales of DVDs keep plummeting. The print-exposed newspapers and consumer magazines segment has the worst forecast through to 2023, with revenues projected to suffer a compound annual decline of 2.3%.Marketing Technology News: PROS Launches Sales Agreement Management to Streamline Selling in Digital EraInnovating for growth in a world of me mediaThe underlying shift that’s reshaping and reorienting the entire industry is changing human behaviour, with a decisive turn towards personalisation. At one level, the new world of E&M appears more isolated, with growing numbers of people engrossed in their own choice of content. But there’s also a dimension of personalisation that’s inherently social, as people share playlists on music-streaming services, recommend movies to friends on social platforms or engage in multi-user video game battle royales.Advances in technology and service offerings are finally enabling people to move from passive to active consumption — not just of individual pieces of media, but of media as a whole. Many signs of this change are pinpointed in the Outlook. One is the trend for consumers to reject the bundles of channels offered by cable or satellite providers, and instead construct their own ad hoc bundles made up of OTT services. 
Global OTT revenue hit US$38.2bn in 2018 and is forecast to almost double by 2023. Another sign is the rise of the smart home, with ownership of smart speaker devices set to rise at a 38.1% CAGR to hit 440mn devices globally in 2023.

Wilson Chow, Global Technology, Media and Telecommunications Leader and Partner, PwC China, comments: "5G's impact will be felt across the entire technology, media and telecommunications value chain for the next decade. It will hasten existing trends towards personalisation, making it easier, more convenient and cheaper to access more media on phones and other mobile devices. Key impacts of 5G for E&M will include enabling more streaming of high-quality video, including of live events like sports and music, and better use of AI, together with massive opportunities for video games and VR in terms of speed and quality of images."

Four priorities shaping companies' strategies
As E&M companies reinvent their organisations and offerings for an increasingly personalised world, four priorities are coming to the fore:

One size does not fit all: As companies approach both markets of individuals and individual geographic markets, they are finding that it makes sense to present different options: all-you-can-eat offerings with unlimited usage in some areas, tiers of payments for different services in less developed markets, and competing on affordability. Meanwhile, across all markets, mature and developing, PwC's research finds stark differences in terms of segment growth.

The number of consumer touch points is expanding: As media and e-commerce experiences become more personal, gratification for consumers is becoming more instant and immediate. In response, content creators and distributors are devising new ways to appeal to consumers as individuals, and marketers are figuring out how to meet consumers at the point of consumption and point them instantaneously towards purchase. Witness the rise of shoppable online advertising, often promoted by 'influencers.' Voice is also becoming a key form of interaction for both search and shopping, supported by the rise of smart speakers.

Technological innovation introduces a new era of personalised computing: Companies are leveraging AI's ability to understand people's individual tastes and consumption habits to offer up the content individual users find most compelling. The combination of AI with 5G will be especially powerful, as it will fuel the rapid growth of segments such as video games and VR. The Outlook's forecasts show video games' compelling combination of growth and scale, while VR will be the fastest-growing segment overall.

Trust and regulation remain pivotal, as personal data hygiene becomes key: With consumers moving to the centre of their own world of media experiences, their personal data, from the music they stream and the news they read to the products they buy, is taking a central role. In the emerging world, maintaining personal data hygiene is becoming key to the overall health of the E&M ecosystem. For companies, this goes beyond regulatory compliance, which is merely table stakes, and extends to building trust by behaving transparently and responsibly with customers' data, ensuring the accuracy of news, and being sensitive to concerns around issues such as digital addiction.

Ennèl van Eeden, Global Entertainment and Media Leader and Partner, PwC Netherlands, comments: "The personalisation wave, fuelled by evolving customer behaviour, is set to be amplified by the forces of technology, scale, and aggressive investing and competition. The implications for organisations are profound. As the borders separating former media silos erode, companies need to think more broadly about the areas and segments in which they operate. At the same time, all E&M players must take the need to 'know your customer' more seriously, and marketers need to allocate their time and attention to new types of content and platforms: influencers, live events, ads inside apps and more. Finally, companies must focus intently on their core capabilities and geographical markets, while continually scanning the horizon for new developments and regulations, and being agile in responding to technological developments such as 5G. Put simply: it's time to get personal with consumers, or be left out of the conversation."

PRNewswire, June 11, 2019, 1:09 pm
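The Outlook's headline figures are easy to sanity-check with the standard compound-annual-growth-rate formula. The snippet below is our own arithmetic, not part of the PwC report; the "almost double" OTT figure is treated as an exact doubling to get a ballpark implied rate.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Industry revenue: US$2.1tn (2018) -> US$2.6tn (2023)
industry = cagr(2.1, 2.6, 5)
print(f"industry CAGR: {industry:.1%}")  # matches the Outlook's ~4.3% after rounding

# OTT revenue (US$38.2bn in 2018) "almost doubling" by 2023 implies roughly:
ott = cagr(38.2, 2 * 38.2, 5)
print(f"implied OTT CAGR: {ott:.1%}")
```

The implied OTT rate of about 15% per year is why the article can say VR's lead over OTT video, though intact, "is greatly diminished."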

Sound That BRANDS Raises Capital; Forming Core Leadership Team

Los Angeles Podcast Studio to Specialize in Branded Content

Sound That BRANDS, the Los Angeles-based podcasting studio specializing in branded audio content for national advertisers, announced that Emmis Communications would provide a round of funding that will allow rapid expansion.

"At Sound That BRANDS, our motto is 'Be the content, not the interruption,'" said Dave Beasing, CEO of Sound That BRANDS and a veteran media consultant. "With Emmis' support, we'll grow quickly, producing audio that is not only entertaining and informative, but builds brand loyalty."

"We've monitored the early stages of global brands telling audio stories through branded podcasts that dramatically enhance loyalty and preference," said Emmis Chairman and CEO Jeff Smulyan. "Sound That BRANDS has already become a leader in branded podcasting and is uniquely positioned to work with marquee brands to build episodes that make you laugh, feel and think in a way that reinforces the brand. Dave is a great audio storyteller, and we look forward to helping him accelerate Sound That BRANDS' growth."

Major brands like Trader Joe's, Facebook, Smead, Tinder, General Electric and McAfee have recently produced branded podcasts.

Fast Company has called branded podcasts "the ads that people actually want to listen to." According to survey data released by Edison Research (April 4, 2019), podcast listenership is booming: 32% of Americans aged 12 and older say they have listened to a podcast in the past month, and of those, 54% say they are more likely to consider the brands they hear advertised on podcasts.

Sound That BRANDS is seeking motivated talent for several key roles, including VP/Revenue and Brand Partnerships. Former KPWR-FM Los Angeles SVP/GM Val Maki is heading the search. Interested parties should apply confidentially on the company's website. Sound That BRANDS is an equal opportunity employer and welcomes inquiries from all locations.

PRNewswire, June 20, 2019, 10:15 pm
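The two Edison Research figures quoted above can be multiplied together to estimate overall reach. This is our own back-of-the-envelope arithmetic, not an additional Edison figure:

```python
# Share of all Americans 12+ who both listened to a podcast in the past
# month AND say podcast ads make them likelier to consider a brand.
# Inputs are the two survey percentages quoted in the article.
monthly_listeners = 0.32   # listened to a podcast in the past month
consider_brands = 0.54     # of listeners, more likely to consider advertised brands

overall = monthly_listeners * consider_brands
print(f"{overall:.0%}")  # ~17% of Americans 12+
```

In other words, the survey implies that roughly one in six Americans aged 12 and up is both reachable by podcast ads and receptive to them.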

NASA Installs SuperCam Instrument on Mars 2020 Rover

By Ryan Whitwam on July 6, 2019 at 9:03 am

NASA's Mars 2020 rover is slowly taking shape. After adding the rover's stereoscopic navigation cameras and prototype wheels, engineers at NASA's Jet Propulsion Laboratory (JPL) have attached the rover's SuperCam Mast Unit. This vital piece of equipment will allow Mars 2020 to analyze samples from a distance as it searches for signs of life.

The SuperCam is mounted to one side of the mast unit. NASA seems to encourage personification of its robots with all the selfies, so you'll probably think of this as the rover's "head." Curiosity has the same mast design, but its instrument is known as the ChemCam. Curiosity's instrument has a laser-induced breakdown spectroscopy (LIBS) instrument and a Remote Micro Imager (RMI) telescope. The SuperCam, as the name implies, is a more advanced version of the ChemCam. It has a LIBS instrument, a Raman spectrometer, an IR spectrometer, and a telephoto camera.

The SuperCam is a collaboration between researchers in the US, France, and Spain. The final component, the IR spectrometer, arrived from France a few weeks ago, allowing JPL to complete the SuperCam installation.

NASA says the SuperCam will allow Mars 2020 to analyze samples for signs of life, even if the organisms died off millions of years ago. The SuperCam can focus on a pencil-point sample from more than 20 feet (6 meters) away, so controllers back on Earth won't need to make as many small adjustments to Mars 2020's location just to scan a new sample. The Mars 2020 rover will have a total of 23 cameras when it's done.

In the next few weeks, JPL engineers will install the Mars 2020 Sample Caching System. That apparatus will use 17 separate motors to scoop up samples of Martian soil to store inside the rover. There are vague plans to one day send a follow-up mission to collect those samples and return them to Earth.

NASA plans to launch Mars 2020 next summer (it will have a real name by then). Upon reaching Mars in February 2021, the rover will land in Jezero crater using a rocket-powered sky crane almost identical to Curiosity's. Once on the surface, Mars 2020 will be able to search for biosignatures that would go unnoticed by Curiosity. NASA plans for a mission at least one Martian year in duration (just shy of two Earth years). If it's anything like Curiosity, it'll operate much longer.

New Solar Panels Use Waste Heat to Purify Water

By Ryan Whitwam on July 11, 2019 at 8:40 am

Solar panels can bring electricity to remote areas, and photovoltaic technology has improved in recent years. However, many of these regions also have limited access to clean water. A new type of solar power setup developed at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia could address both of those issues. These panels leverage waste heat from solar panels to distill and purify water.

The researchers used salt water to test the technology, but it should work equally well on fresh water that simply isn't fit for human consumption. The solar panels sit on top of a multi-layer box where they can absorb sunlight and generate power. Below that is a three-stage distillation unit.

There's nothing particularly special about the solar panels used in the system. About 10 percent of the sunlight hitting the photovoltaic cells goes toward generating power. While much higher efficiencies have been demonstrated in the lab, most commercial cells are only a little more efficient. However, those panels aren't also generating water.

As the panels pull in solar radiation to make power, some of the energy radiates out as waste heat. The KAUST design directs that heat down into the first layer of water purification. It heats seawater, causing it to evaporate and re-condense as clean, fresh water. The heat generated by the first layer passes through a membrane into the second distillation layer, where it purifies more water, and the same happens in the third (bottom) layer. It's essentially a very fancy stacked solar still; the researchers note the design purified three times as much water as a conventional solar still, and you get electricity, too.

According to the team, water passed through the device comes out safe for drinking by all measurable standards. The levels of lead, copper, sodium, calcium, and magnesium after filtration are all below the thresholds set by the World Health Organization.

You might be able to deploy a more efficient solar farm and a water purification rig as separate systems, but the system developed by these researchers could be a major breakthrough because it can do both at the same time. The team sees this as an ideal solution for remote areas where people have limited power and ample access to undrinkable water. With more efficient solar panels, they say, this system could eventually generate 10 percent of the world's fresh water.
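To get a feel for the scale involved, here is a rough energy-balance sketch of the waste-heat-to-water idea. Every number is an illustrative assumption on our part (peak irradiance, the fraction of waste heat reaching the still, and an idealized linear gain per distillation stage); none of them comes from the KAUST paper.

```python
# Back-of-the-envelope estimate of distillate yield from PV waste heat.
# All constants are illustrative assumptions, not figures from the paper.

IRRADIANCE = 1000.0   # W/m^2, typical peak solar irradiance at the surface
ELECTRIC_EFF = 0.10   # fraction converted to electricity (per the article)
HEAT_CAPTURE = 0.60   # assumed fraction of the remainder reaching the still
LATENT_HEAT = 2.26e6  # J/kg, latent heat of vaporization of water
STAGES = 3            # the design recycles condensation heat across 3 stages

def water_per_hour(area_m2):
    """Idealized distillate yield (kg/h) for a panel of the given area."""
    waste_heat = IRRADIANCE * area_m2 * (1 - ELECTRIC_EFF) * HEAT_CAPTURE
    # Idealization: each stage reuses the heat released by condensation in
    # the stage above, so yield scales roughly with the number of stages.
    kg_per_s = waste_heat * STAGES / LATENT_HEAT
    return kg_per_s * 3600

print(f"{water_per_hour(1.0):.2f} kg/h per m^2 at peak sun")
```

Even with these crude assumptions, the takeaway matches the article: a modest panel area can yield liters of drinking water per hour while still delivering its full electrical output.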

How Deepfakes and Other Reality-Distorting AI Can Actually Help Us

We're not far from the day when artificial intelligence will provide us with a paintbrush for reality. As the foundations we've relied upon lose their integrity, many people find themselves afraid of what's to come. But we've always lived in a world where our senses misrepresent reality. New technologies will help us get closer to the truth by showing us where we can't find it.

From a historical viewpoint, we've never successfully stopped the progression of any technology, and we owe the level of safety and security we enjoy to that ongoing progression. While normal accidents do occur and the downsides of progress likely won't ever cease to exist, we make the problem worse when trying to fight the inevitable. Besides, reality has never been as clear and accurate as we want to believe. We fight against new technology because we believe it creates uncertainty when, more accurately, it only shines a light on the uncertainty that's always existed and we've preferred to ignore.

We Already Accept False Realities Every Day

The dissolution of our reality, a fear brought on by artificial intelligence, is a mirage. For a good while, we've put our trust in what we see and hear throughout our lives, whether in the media or from people we know. But neither constitutes reality, because reality has never been absolute. Our reality is a relative construct. It's what we agree upon together based on the information we gain from our experience. By observing and sharing our observations, we can attempt to construct a picture of an objective reality.
Naturally, that goal becomes much harder to achieve when people lie or utilize technology that makes convincing lies more possible. It seems to threaten the very stability of reality as we know it.

But our idea of reality is flawed. It's comprised of human observation and conjecture. It's limited by how our bodies sense the world around us and how our brains process that acquired information. Although we may capture a lot, we can only sense a sliver of the electromagnetic spectrum, and even that constitutes too much for our brains to process at once. Like the healing brush in Photoshop, our brains fill in the gaps in our vision with their best guess at what belongs. You can test your blind spots to get a better idea of how this works, or just watch it in action by looking at an optical illusion.

[Image credit: BrainDen | Scroll or turn your head if you don't see motion]

This, among other cognitive processes, produces subjective versions of reality. You already cannot experience every aspect of a moment, and you certainly won't remember every detail. But on top of that, you don't even see everything you see. Your brain constructs the missing parts, hides visual information (especially when we're moving), makes you hear the wrong sounds, and can mistake rubber limbs for your own. When you have a limited view of any given moment, and the information you obtain isn't fully accurate to begin with, you're left with a subjective version of reality that you're unable to fully gauge. Trusting collective human observations led us to believe geese grew on trees for about 700 years. Human observations, conclusions, and beliefs are not objective reality. Even in the best of circumstances we will, at times, get things extraordinarily wrong.

Everything you know and understand passes through your brain, and your brain doesn't offer an accurate picture of reality. To make matters worse, our memories often fail us in numerous ways.
The way we see the world is neither true nor remotely complete. So, for a long time, we have relied on other people to help us understand what's true. That can work just fine in many situations, but sometimes people lie or have vastly different versions of the same situation due to past experiences. Either way, problems occur when subjective observations clash and people cannot agree upon what really happened. Technology has helped us improve upon that problem, technology we widely feared during its initial introduction.

We Either Trust or Distrust Technology Too Much

Throughout time we've created tools to help us survive as a species. By developing new tools, we've been able to spread information more easily and create a sense of trust. Video and audio recordings allowed us to bypass the brain's processes and record an un-augmented record of an event, at least from a singular point of view. A video camera still fails to capture the full reality of a given moment.

[Image credit: Horror Freak News | Security footage can look bad or even creepy, but that doesn't always indicate a real problem.]

For example, imagine someone pulls out a knife in a fight and fakes a swipe to try to frighten their attacker without any intention of doing actual harm. Video surveillance paints a different picture without this context. To an officer of the law, the security footage will show assault with a deadly weapon. With no other evidence to provide context, the officer has to err on the side of caution and make an arrest.

Whether or not such assumptions lead to less crime or more questionable arrests doesn't change the fact that an objective recording of reality misses information. We trust recordings as truth when they only offer a part of the truth.
When we trust video, audio, or anything that cannot tell the full story, we put our faith in a medium that lies by omission by design—just like any observer of reality.
Faults exist in technology, but that doesn’t offer cause to discard it. Overall, we’ve benefited from advancements that allowed objective recordings of the world around us. Not all recordings require additional context. A video of a cute puppy might not be cute to everyone, but—for the most part—people will agree they’re seeing a puppy. Meanwhile, we used to call the sky green and can’t agree on the color of a dress in a bad photograph. As technology progresses and becomes accessible to more and more people, we all begin to learn when and how it can paint reality with a less accurate brush than we liked to believe.
This realization causes fear because our system of understanding the world starts to break down. We can’t rely on the tools we once could to understand our world. We have to question the reliability of the things we see recorded, and that goes against much of what we’ve learned, experienced, and integrated into our identities. When new technologies emerge that further erode our ability to trust what’s familiar, they incite this fear, which we tend to attribute to the technology rather than ourselves. Phone calls are a normal part of life, but they were, initially, seen as an instrument of the devil.
Today, AI enjoys similar problems. Deepfakes stirred a panic when people began to see how easily a machine could swap faces in videos with startling accuracy—provided it had numerous quality videos and photos that met specific requirements. While these deepfakes rarely fooled anyone, we all got a glimpse of the near future where artificial intelligence would progress to a point where we’d fail to know the difference.
That day came last month when Stanford University, Princeton University, the Max Planck Institute for Informatics, and Adobe released a paper that demonstrated an incredibly simple method of editing recorded video to change spoken dialogue both visually and aurally—one that fooled the majority of people who saw the results. Take a look:
Visit the paper’s abstract and you’ll find most of the text dedicated to ethical considerations—a common practice nowadays. AI researchers can’t do their jobs well without considering the eventual applications of their work. That includes discussing malicious use cases so people can understand how to use it for good purposes and prepare for the problems expected to arise.
Ethics statements can feed public panic because they indirectly act as a sort of vague science fiction in which our fearful imaginations must fill in the blanks. When experts present the problem, it’s easy to think of only the worst-case scenarios. Even when you consider the benefits, faster video editing and error correction seem like a small advantage when the negatives include fake news people will struggle to identify.
We Only Lose When We Resist Progress
Nevertheless, this technology will emerge regardless of any efforts to stop it. Our own history repeatedly demonstrates that any efforts to stop the progression of science will, at most, result in a brief delay. We should not want to stop people who understand and care about the ethics of what they create, because that leaves others to create the same technology in the shadows. What we can’t see might seem less frightening for a while, but we have no way of preparing, understanding, or guiding these efforts when they’re invisible.
While technologies like the aforementioned text-based video editor will inevitably lead both to malicious uses and more capable AI in the future, we already fall victim to similar manipulations on a daily basis.
Doctored photos are nothing new, and manipulative editing showcases how context can determine meaning—a technique taught in film school. AI adds another tool to the box and increases mistrust in a medium that has always been easily manipulated. This is unpleasant to experience, but ultimately a good thing.
Image credit: Will Sigmon
We put too much trust in our senses and the recordings we view. Reminders of this help prevent us from doing that. When Apple adds attention correction to video chats and Google actually makes a voice assistant that can make phone calls for you, we will need to remember that what we see and hear may not accurately represent reality. Life doesn’t require accuracy to progress and thrive. Pretending we can observe objective reality does more harm than accepting we can’t. We don’t know everything, our purpose remains a mystery to science, and we will always make mistakes. Our problem is not with artificial intelligence, but rather that we believe we know the full story when we only know a few details.
As we enter this new era, we should not fight against the inevitable technology that continues to shine a spotlight on our misplaced trust. AI continues to demonstrate the fragility of the ways we conceive of reality as a species at a very rapid pace. That kind of change hurts. We lose our footing upon realizing we had only imagined the stable ground we’ve walked upon our entire lives. We seek a new place of stability as we tumble through uncertainty because we see the solution as the problem. We may not be ready for this change, but if we fight the inevitable we will never be.
Artificial intelligence will continue to erode the false comforts we enjoy, and that can be frightening, but that fear is also an opportunity.
It provides us with a choice: to oppose something that scares us, or attempt to understand it and use it for the benefit of humanity.
Now read:
Soon, Alexa Will Know When You’re About to Die
New Research Warns of ‘Normal Accident’ From AI in Modern Warfare
Google Duplex AI Still Needs a Lot of Help From Humans
Top image credit: Getty Images
How Deepfakes and Other Reality-Distorting AI Can Actually Help Us, by Adam Dachis, July 19, 2019 at 11:02 am

How to Create Your Own State-of-the-Art Text Generation System

By David Cardinal on June 26, 2019 at 12:01 pm
Hardly a day goes by when there isn’t a story about fake news. It reminds me of a quote from my favorite radio newsman from my youth, “If you don’t like the news, go out and make some of your own.” OpenAI’s breakthrough language model, the 1.5 billion parameter version of GPT-2, got close enough that the group decided it was too dangerous to release publicly, at least for now. However, OpenAI has now released two smaller versions of the model, along with tools for fine-tuning them on your own text. So, without too much effort, and using dramatically less GPU time than it would take to train from scratch, you can create a tuned version of GPT-2 that will be able to generate text in the style you give it, or even start to answer questions similar to ones you train it with.
What Makes GPT-2 Special
GPT-2 (Generative Pre-Trained Transformer version 2) is based on a version of the very powerful Transformer Attention-based Neural Network.
What got the researchers at OpenAI so excited about it was finding that it could address a number of language tasks without being directly trained on them. Once pre-trained with its massive corpus of Reddit data and given the proper prompts, it did a passable job of answering questions and translating languages. It certainly isn’t anything like Watson as far as semantic knowledge, but this type of unsupervised learning is particularly exciting because it removes much of the time and expense needed to label data for supervised learning.
Overview of Working With GPT-2
For such a powerful tool, the process of working with GPT-2 is thankfully fairly simple, as long as you are at least a little familiar with Tensorflow. Most of the tutorials I’ve found also rely on Python, so having at least a basic knowledge of programming in Python or a similar language is very helpful. Currently, OpenAI has released two pre-trained versions of GPT-2. One (117M) has 117 million parameters, while the other (345M) has 345 million. As you might expect, the larger version requires more GPU memory and takes longer to train. You can train either on your CPU, but it is going to be really slow.
The first step is downloading one or both of the models. Fortunately, most of the tutorials, including the ones we’ll walk you through below, have Python code to do that for you. Once downloaded, you can run the pre-trained model either to generate text automatically or in response to a prompt you provide. But there is also code that lets you build on the pre-trained model by fine-tuning it on a data source of your choice. Once you’ve tuned your model to your satisfaction, then it’s simply a matter of running it and providing suitable prompts.
Working with GPT-2 on Your Local Machine
There are a number of tutorials on this, but my favorite is by Max Woolf. In fact, until the OpenAI release, I was working with his text-generating RNN, which he borrowed from for his GPT-2 work.
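The download, fine-tune, generate loop just described maps onto a handful of calls in Woolf’s gpt-2-simple package. Here is a minimal sketch; the corpus path and step count are placeholders, and it assumes you’ve run `pip install gpt-2-simple` and, realistically, have a GPU:

```python
def finetune_and_generate(corpus_path="corpus.txt", steps=1000):
    """Sketch of the gpt-2-simple workflow: download a pre-trained model,
    fine-tune it on one text file, then sample from it.
    The import is deferred so this file loads even without the package."""
    import os
    import gpt_2_simple as gpt2

    model_name = "117M"  # the smaller public model; "345M" needs most of a 16GB GPU
    if not os.path.isdir(os.path.join("models", model_name)):
        gpt2.download_gpt2(model_name=model_name)  # fetch pre-trained weights

    sess = gpt2.start_tf_sess()
    # Checkpoints land in checkpoint/run1 by default
    gpt2.finetune(sess, corpus_path, model_name=model_name, steps=steps)
    gpt2.generate(sess, prefix="Hardly a day goes by", nsamples=3)
```

This mirrors the steps the article walks through by hand: download once, tune on your own text, then prompt the tuned model.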
He’s provided a full package on GitHub for downloading, tuning, and running a GPT-2 based model. You can even snag it directly as a package from PyPI. The readme walks you through the entire process, with some suggestions on how to tweak various parameters. If you happen to have a massive GPU handy, this is a great approach, but since the 345M model needs most of a 16GB GPU for training or tuning, you may need to turn to a cloud GPU.
Working with GPT-2 for Free Using Google’s Colab
Fortunately, there is a way to use a powerful GPU in the cloud for free — Google’s Colab. It isn’t as flexible as an actual Google Compute Engine account, and you have to reload everything each session, but did I mention it’s free? In my testing, I got either a Tesla T4 or a K80 GPU when I initialized a notebook, either one of which is fast enough to train these models at a reasonable clip. The best part is that Woolf has already authored a Colab notebook that echoes the local Python code version of gpt2-simple. Much like the desktop version, you can simply follow along, or tweak parameters to experiment. There is some added complexity in getting the data in and out of Colab, but the notebook will walk you through that as well.
Getting Data for Your Project
Now that powerful language models have been released onto the web, and tutorials abound on how to use them, the hardest part of your project might be creating the dataset you want to use for tuning. If you want to replicate the experiments of others by having it generate Shakespeare or write Star Trek dialog, you can simply snag one that is online. In my case, I wanted to see how the models would do when asked to generate articles like those found on ExtremeTech. I had access to a back catalog of over 12,000 articles from the last 10 years. So I was able to put them together into a text file, and use it as the basis for fine-tuning.
If you have other ambitions that include mimicking a website, scraping is certainly an alternative.
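Putting a back catalog of articles together into a single training file takes only the standard library. A sketch (the directory layout is hypothetical; `<|endoftext|>` is the delimiter GPT-2 itself uses between documents):

```python
import glob

def build_corpus(article_dir="articles", out_path="corpus.txt"):
    """Concatenate every .txt article in a directory into one training
    file, separated by GPT-2's document delimiter. Returns the count."""
    paths = sorted(glob.glob(f"{article_dir}/*.txt"))
    with open(out_path, "w", encoding="utf-8") as out:
        for path in paths:
            with open(path, encoding="utf-8") as f:
                out.write(f.read().strip())
            out.write("\n<|endoftext|>\n")
    return len(paths)
```

The delimiter lets the model learn where one article ends and the next begins instead of treating 12,000 pieces as one run-on document.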
There are some sophisticated services like ParseHub, but they are limited unless you pay for a commercial plan. I have found the Chrome Extension to be flexible enough for many applications, and it’s fast and free. One big cautionary note is to pay attention to the Terms of Service for whatever website you’re thinking of, as well as any copyright issues. From looking at the output of various language models, they certainly aren’t taught not to plagiarize.
So, Can It Do Tech Journalism?
Once I had my corpus of 12,000 ExtremeTech articles, I started by trying to train the simplified GPT-2 on my desktop’s Nvidia 1080 GPU. Unfortunately, the GPU’s 8GB of RAM wasn’t enough. So I switched to training the 117M model on my 4-core i7. It wasn’t insanely terrible, but it would have taken over a week to make a real dent even with the smaller of the two models. So I switched to Colab and the 345M model. The training was much, much faster, but needing to deal with session resets and the unpredictability of which GPU I’d get for each session was annoying.
Upgrading to Google’s Compute Engine
After that, I bit the bullet, signed up for a Google Compute Engine account, and decided to take advantage of the $300 credit Google gives new customers. If you’re not familiar with setting up a VM in the cloud, it can be a bit daunting, but there are lots of online guides. It’s simplest if you start with one of the pre-configured VMs that already has Tensorflow installed. I picked a Linux version with 4 vCPUs. Even though my desktop system is Windows, the same Python code ran perfectly on both. You then need to add a GPU, which in my case took a request to Google support for permission. I assume that is because GPU-equipped machines are more expensive and less flexible than CPU-only machines, so they have some type of vetting process. It only took a couple of hours, and I was able to launch a VM with a Tesla T4.
When I first logged in (using the built-in SSH), it reminded me that I needed to install Nvidia drivers for the T4, and gave me the command I needed.
Next, you need to set up a file transfer client like WinSCP, and get started working with your model. Once you upload your code and data, create a Python virtual environment (optional), and load up the needed packages, you can proceed the same way you did on your desktop. I trained my model in increments of 15,000 steps and downloaded the model checkpoints each time, so I’d have them for reference. That can be particularly important if you have a small training dataset, as too much training can cause your model to over-fit and actually get worse. So having checkpoints you can return to is valuable.
Speaking of checkpoints, like the models, they’re large. So you’ll probably want to add a disk to your VM. By having the disk separate, you can always use it for other projects. The process for automatically mounting it is a bit annoying (it seems like it could be a checkbox, but it’s not). Fortunately, you only have to do it once. After I had my VM up and running with the needed code, model, and training data, I let it loose. The T4 was able to run about one step every 1.5 seconds. The VM I’d configured cost about $25/day (remember that VMs don’t turn themselves off; you need to shut them down if you don’t want to be billed, and persistent disk keeps getting billed even then).
To save some money, I transferred the model checkpoints (as a .zip file) back to my desktop. I could then shut down the VM (saving a buck or two an hour), and interact with the model locally. You get the same output either way because the model and checkpoint are identical. The traditional way to evaluate the success of your training is to hold out a portion of your training data as a validation set.
If the training loss continues to decrease but the loss computed on the held-out validation data starts to rise, it is likely you’ve started to over-fit your data, and your model is simply “memorizing” your input and feeding it back to you. That reduces its ability to deal with new information.
Here’s the Beef: Some Sample Outputs After Days of Training
After experimenting with various types of prompts, I settled on feeding the model (which I’ve nicknamed The Oracle) the first sentences of actual ExtremeTech articles and seeing what it came up with. After 48 hours (106,000 steps in this case) of training on a T4, here is an example:
The output of our model after two days of training on a T4 when fed the first sentence of Ryan Whitwam’s Titan article. Obviously, it’s not going to fool anyone, but the model is starting to do a decent job of linking similar concepts together at this point.
The more information the model has about a topic, the more it starts to generate plausible text. We write about Windows Update a lot, so I figured I’d let the model give it a try:
The model’s response to a prompt about Windows Update after a couple of days of training.
With something as subjective as text generation, it is hard to know how far to go with training a model. That’s particularly true because each time a prompt is submitted, you’ll get a different response. If you want to get some plausible or amusing answers, your best bet is to generate several samples for each prompt and look through them yourself.
In the case of the Windows Update prompt, we fed the model the same prompt after another few hours of training, and it looked like the extra work might have been helpful:
After another few hours of training, here is the best of the samples when given the same prompt about Microsoft Windows.
Here’s Why Unsupervised Models are So Cool
I was impressed, but not blown away, by the raw predictive performance of GPT-2 (at least the public version) compared with simpler solutions like textgenrnn. What I didn’t catch on to until later was the versatility. GPT-2 is general purpose enough that it can address a wide variety of use cases. For example, if you give it pairs of French and English sentences as a prompt, followed by only a French sentence, it does a plausible job of generating translations. Or if you give it question-and-answer pairs, followed by a question, it does a decent job of coming up with a plausible answer. If you generate some interesting text or articles, please consider sharing, as this is definitely a learning experience for all of us.
Now Read:
Google Fed a Language Algorithm Math Equations. It Learned How to Solve New Ones
IBM’s resistive computing could massively accelerate AI — and get us closer to Asimov’s Positronic Brain
Nvidia’s vision for deep learning AI: Is there anything a computer can’t do?
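The checkpoint-and-validation routine described earlier (train in increments, keep each checkpoint, stop when held-out loss turns upward) can be sketched as a generic early-stopping loop; `train_step` and `validate` are stand-ins for whatever runs your training increments and computes loss on the held-out data:

```python
def train_with_early_stopping(train_step, validate, max_rounds=10, patience=2):
    """Run training in increments, validating after each round; stop once
    validation loss has failed to improve `patience` times in a row.
    `train_step(round)` trains one increment and returns a checkpoint id;
    `validate(checkpoint)` returns the loss on held-out data.
    Returns the best checkpoint and its validation loss."""
    best_loss, best_ckpt, bad_rounds = float("inf"), None, 0
    for r in range(max_rounds):
        ckpt = train_step(r)
        loss = validate(ckpt)
        if loss < best_loss:
            best_loss, best_ckpt, bad_rounds = loss, ckpt, 0
        else:
            bad_rounds += 1
            if bad_rounds >= patience:  # validation loss keeps rising: likely over-fitting
                break
    return best_ckpt, best_loss
```

Keeping every checkpoint, as the article does, is what makes the rollback to the best round possible.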

Leading CMOs Gain $4 for Every Dollar Invested in Marketing Measurement

PRNewswire, April 17, 2019, 6:23 pm
2019 Study Commissioned by Equifax Finds Companies with Highest Measurement Maturity Generate More Than $73 Million in Additional Annual Revenue, Among Other Business Benefits
Equifax Data-driven Marketing (DDM), the marketing data, analytics and technology solutions capability of Equifax Inc. (EFX), presented findings from a commissioned marketing measurement study conducted by Forrester Consulting on behalf of Equifax. The study found that marketers with high measurement maturity earn approximately $4 for every dollar they spend on marketing measurement.
In Leader Of The Pack: How Holistic Marketing Measurement Drives Business Success, Forrester explored the state of marketing measurement in a survey of 300 marketing, data and analytics decision-makers at U.S. enterprises across more than 10 industries that have active marketing performance strategies.
A small group of marketers, 51 companies (17%), have high measurement maturity, enabling them to produce accurate, unified and actionable insights applied strategically as well as tactically. These companies are considered leaders, and they stand apart from peers in critical performance metrics. Forrester found that on average, marketing leaders:
- Earn approximately $4 for every dollar they spend on marketing measurement
- Secure more than $73 million in additional revenue; about a 3 percent increase in revenue overall
- Generate nearly 3 percent more leads
- Decrease their marketing spend by over $2 million
- Increase their Net Promoter Score by more than 7 points
- Better match offline sales to their digital marketing efforts
“This new data shows the incredible impact of successful marketing measurement, creating significantly higher business benefits that impact a company’s bottom line.
While most companies still struggle with holistic measurement maturity, leaders pull ahead of the pack with their efficiency in managing data, better insights and less wasted spend, resulting in more targeted campaigns and better acquisition conversions. Their CMOs are focused on campaign improvement, they own their data, and they leverage it more effectively,” said Mykolas Rambus, General Manager, Equifax DDM. “Our OptimaHub marketing attribution solution is precisely designed as an end-to-end solution to bring marketers deeper intelligence around their campaigns and the value they bring to their business.”
Other study findings:
- Leaders use more measurement techniques. While all marketing leaders use unified measurement, they also more heavily use tools such as mix modeling and advanced digital attribution. Still, less than half of companies surveyed (43%) use unified measurement, or what Forrester calls the “most mature approach to measurement,” leading to varied levels of accuracy for firms.
- Leaders’ No. 1 goal is customer insights. Marketing measurement leaders do a better job focusing on optimizing and getting the most out of their campaigns. Leaders optimize the targeting of campaigns (65%), manage more contextual (54%) or personalized (48%) marketing and gain better customer insights (52%). Their peers focus on more tactical efforts such as customer targeting, forecasting and planning.
- Leaders are better at data integration. Successful data integration provides a better understanding of the customer journey and is key to campaign success for leaders, who integrate online and offline data at a far higher rate. Around 63% of leaders have integrated their digital and offline channels, while 87% have integrated their digital channels alone.
- Leaders leverage more data. A full 75% of leaders use both customer ID and regular data, vs. around 50% for their peers.
Leaders use far more demographic data (58%) and leverage more types of data including CLV, digital media performance and direct response. These investments in marketing measurement drive across-the-board efficiency improvements as teams are more effective with proper insights.
- Marketing measurement is ripe for investment. Firms are clearly seeing the benefit of increasing the percentage of their budget allocated to measurement. By 2020, marketers expect measurement will make up 10% of their overall marketing budget, up from 5% two years ago—doubling over four years.
Equifax highlighted findings of the study at the Forrester Consumer Marketing 2019 conference in April in New York. Download the study for more detailed information on the research results.
The Equifax DDM capability provides unique Equifax data insights about economic capacity as well as functional solutions for marketers such as identity resolution, direct and digital engagement, and attribution and performance analytics to help brands market with precision, manage risk and drive superior returns. OptimaHub is the Equifax marketing measurement capability that uses a sophisticated measurement solution paired with unique household economic insights from Equifax to provide marketers with robust and actionable intelligence not available elsewhere.
Equifax DDM brings together data assets, analytics, technology, and integrated marketing capabilities to solve key challenges for marketing executives and helps more than 300 customers across the financial, insurance, telecommunications, travel and other industries.

U°OS Network — a Universal Portable Reputation System — Launches Beta

U°OS Aims to Become the Digital Reputation Standard on the Web, Making Network Economies More Productive
U°OS, a blockchain protocol that translates economic and social actions into reputation, proposes a universal portable reputation system and aims to become the standard on the Web. Adjustable to any e-commerce, social media, or review platform, and any network in general, the U°OS Network is underpinned by a unique Delegated-Proof-of-Importance consensus algorithm, developed in-house and based on NCDawareRank. After being in research and development for over a year, U°OS is launching the public beta on May 15, 2019 with the top EOS block producers on board.
The emergence of distributed protocols has taken the first step in creating the Web 3.0. Users are moving towards true peer-to-peer interaction without an intermediary that holds the keys to their digital selfhood. While cryptocurrencies enable decentralized financial transactions and infrastructure protocols let them run decentralized apps on the chain, truly peer-to-peer social communications, reputation, and identity systems have just started showing their green shoots.
“Centralized services have a biased incentive structure for interpreting reputation data and disproportionate power to modify it. We are not controlling our reputation — it is confined to a single platform and context, thus cannot be detached and transferred to another place. This translates into a time-consuming task to understand who is truly behind the digital avatar of an individual or an organization.
The absence of a universal and distributed reputation system is the reason why the decision-making process is slow and costly” — John Sneisen, two-time best-selling author, monetary history expert, and U°OS Advisor.
The U°OS reputation model enables people to interact in the digital environment as sovereign individuals, solving the problem of not having complete and unequivocal ownership of one’s digital selfhood and network influence. U°OS allows digital entities — individuals or organizations — to have a unified reputation for a natural decision-making process about the trustworthiness of the peer. The U°OS reputation is multi-context, transparent, and distributed.
The key characteristics of the U°OS reputation system are:
- Transparency — the blockchain-recorded data is public and increases trust to a digital entity;
- Universality — the system can be integrated into any existing application via API and OAuth;
- Portability — algorithmic operation on the public ledger without belonging to any centralized authority;
- Privacy-friendliness — users are not required to reveal their identity to use the system.
“Despite being seemingly complex, U°OS is already neatly packed into user-friendly interfaces and APIs to make the user experience as smooth as possible for casual users as well as developers” — Andrew Perepelitsa, U°OS Head of Developer Relations.
The U°OS beta launches with U°Community — a decentralized application to run Decentralized Autonomous Communities (DAC) and Decentralized Autonomous Organizations (DAO).
The beta will also see the very first plug-and-play case: integration with U°Today, a news, research and educational agency covering the blockchain industry and new generation technology.
PRNewswire, May 16, 2019, 2:03 pm
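The announcement doesn’t spell out the NCDawareRank-based algorithm, but NCDawareRank belongs to the PageRank family, so a plain power-iteration sketch conveys the general idea of importance flowing along network links. This is purely illustrative, not the U°OS formula:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank power iteration over a dict of node -> outbound
    links. Illustrative only: the actual NCDawareRank variant also
    exploits the network's decomposition into blocks, which this omits."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share  # importance flows along each link
            else:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank
```

In a reputation setting, links would be upvotes, transactions, or other economic and social actions between accounts.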

Comm100 Launches Agent Assist to Boost Agent Performance and Customer Satisfaction

MTS Staff Writer, June 26, 2019, 10:15 pm
Comm100, a global provider of omnichannel customer experience solutions, announced the launch of Agent Assist, an AI-powered virtual assistant that helps agents respond to customer queries more quickly, accurately and confidently than ever before. Agent Assist significantly reduces the time agents spend hunting for answers, resulting in faster resolution, higher capacity and more time to focus on more complex or sensitive customer inquiries—all leading to higher customer satisfaction scores.
As AI matures, many companies are weighing the impact this new technology can have on their customers’ experience with their brand, and agent-facing AI is at the heart of that conversation. Agent Assist helps agents become smarter and more efficient, monitoring chat conversations and pulling relevant information from company data in real time so that agents can access the answers they need faster and more accurately.
Key features include:
- Enhanced agent capabilities: Agent Assist provides real-time answer suggestions to inbound live chat queries from Comm100 canned messages, chatbot intents and knowledge base articles, and gives agents the option to edit those responses before they’re sent. It also streamlines common service requests, like order tracking and password resets. Many of those requests require agents to gather more information, causing them to spend a long time on a routine action; using Agent Assist, agents can invoke a chatbot workflow to gather the details and take back control of the conversation once that’s complete.
Additionally, if a chatbot is integrated with core business systems, agents can use it to access those systems and instantly deliver more personalized answers.
- Straightforward system configuration: With a few clicks, an administrator can train Agent Assist to recognize industry-specific synonyms, select or deselect the resources Agent Assist should use and control how tightly Agent Assist interprets messages with tunable sensitivity scoring.
- Intelligent learning and easy maintenance: Comm100’s machine learning algorithm observes agent behavior to improve answer suggestions. If Agent Assist cannot help in a certain situation, agents can mark those unrecognized questions to be placed in a learning portal for easy content management and coverage.
“Many companies want to deploy AI in their contact centers, but might not know where to begin, or are concerned about how their customers will react,” said Jeff Epstein, VP of Product at Comm100. “Because its suggestions don’t get pushed to customers without approval, Agent Assist provides a low-risk way to leverage the promise of AI to make contact centers more effective. By putting AI to work for contact center staff, organizations can create ‘Super Agents’ with more knowledge and more capacity to handle any question a customer throws at them—making them the true heroes of customer experience.
Agent Assist also helps new agents get up to speed more quickly, reducing their learning curve and making them more productive faster than previously possible."

MarTech Interview with Joe Sacchetti, Head of Channel Partner Program at KeyedIn

"AI threatens to help us digest new info in real-time so we don't have to use tools to figure it out in weeks."

Tell us about your role and journey into Technology. What inspired you to start at KeyedIn?

Funny story? Here's my path: US Army Ranger in Iraq – Big Pharma for a decade, then Technology for the past 20. Add in a recent business degree at MIT and I was hungry to build a world-class channel from scratch.

What is KeyedIn and how does it fit into a modern Enterprise Technology stack?

KeyedIn is Cloud-based Project and Portfolio Management software that combines bottom-up usability with top-down reporting power. Beyond just tracking tasks, KeyedIn allows companies to prioritize the most valuable portfolio of projects, assess resource capacity, track milestones and benefits, and give full visibility to leaders up and down the management chain.

Which businesses are fastest to the adoption of IT Resource Management platforms (ITRM)?

I have found that any business or industry vertical looking for a competitive advantage through business practices like Six Sigma, Lean and Agile management techniques benefits the most from ITRM.

How does it impact Marketing and Sales operations?

For the end-user, forecasting now can become a science and not a dart heading towards a dartboard!

Tell us more about your recent announcement on your Cloud-based PPM Solution.

KeyedIn recently launched a new version of its cloud-based project and portfolio management (PPM) solution with improved Gantt chart performance, enabling users to plan and execute projects with more than 2,000 tasks with superior performance.
Designed for enterprise organizations, the new version of KeyedIn Projects delivers 10x improved Gantt chart performance, allowing it to handle 10 times the number of tasks.

Tell us how you identified your Global Partners. What are the basic requirements to be a part of the Global Partner Program?

Choosing your partners is one of the most important secrets to success. I devoted my research at MIT to just this, and you can check it out for yourself if you like. Choosing just a select few partners with deep domain knowledge of the PM space was the secret sauce for us, and the idea carries across to other industries and other partners. Depending on the software or hardware and the geography that will be covered, a wide net of aggressive sales-oriented partners may be the ticket for you. I love to consult on the topic, and I'm happy to 'talk Channel' anytime with companies looking to launch one.

How can businesses maximize their ROI from investing in a PPM platform?

Any business running multiple projects at a given time, and all the successful ones do, needs to have a complete look at who's doing what, when, for how long, and for how much. Multiply that by 10 projects concurrently running at different stages, and we've just launched a new version of KeyedIn Projects that is light years beyond keeping track of it all with spreadsheets. Our software has additional tracking and analytics in real-time, and that can help vault businesses straight to the top. Gotta have it.

Tell us about your technology integrations with key Marketing Technology platforms such as Contacts, Contracts, Email and Customer Service.

If a PPM and PSA solution can integrate into your existing CRM, expense, and ERP billing systems, that would be very cool. Some don't. KeyedIn does. That makes it attractive as a partner.

Which Marketing and Sales Automation tools and technologies do you currently use?

We not only integrate with but of course use the platforms we work with best: Salesforce CRM and Intacct financial ERP.
Outreach is a favorite too, and Impartner gives us our great partner portal called ParterUp!

What are your predictions on the most impactful disruptions in AI and Digital Asset Management technology for 2019-2020?

As AI continues to blaze trails and open up previously closed doors, KeyedIn software will be able to track and analyze the changes and outcomes, and complete new projects even faster.

How do you prepare for an AI-centric world as a Technology Leader?

Folks that see the future coming are renegades! AI threatens to help us digest new info in real-time so we don't have to use tools to figure it out in weeks. We can be as ready as the CTO wants to be in order to instantly evaluate projects that will succeed over projects that will fail.

How do you inspire your people to work with technology?

As an Army Ranger who led troops in combat, creating a collaborative work environment with the greatest respect for the experts that work for you is a great way for all to stay inspired, including mostly me!

One word that best describes how you work.

Enlightened.

What apps/software/tools can't you live without?

LinkedIn, and my mobile expenses app.

What's your smartest work-related shortcut or productivity hack?

Say YES, and then do it NOW.

What are you currently reading?

I am reading The Go-Giver – a great pick my Chief Strategy Officer Robbie Reid mailed to me to chat over. I just finished Winner's Dream by SAP's Bill McDermott, and The Compound Effect is next. I am also publishing my first book, on combat leadership, later this summer, called Leading Rangers.

What's the best advice you've ever received?

I saw Jimmy Valvano (the NCAA Champ coach fighting cancer) say: Don't give up. Don't ever give up. A couple of times in the desert I was tested close to the breaking point, but I remembered what Jimmy V said.

Something you do better than others – the secret of your success?

I'm not smarter or taller or faster. But I never stop. I love the action.
I love to learn, travel, speak languages, and I love to LISTEN.

Tag the one person in the industry whose answers to these questions you would love to read:

Tommy Lasorda – he's not in business, but what an everyman leader he was. They threw him a ragtag bunch of new players and guys on the way back down in the Sydney Olympics with 3 weeks of training, and he took them to the Gold.

Thank you, Joe! That was fun and hope to see you back on MarTech Series soon.

About Joe

Joe is an experienced Global Channel and Sales Leader with a demonstrated history of working in the computer software industry. A combat-decorated US Army Ranger, he is skilled in the leadership and motivation of direct reports to success. A strong MIT-trained sales professional, he has a spectacular ability to present to large groups and a magnetic skill for reaching diverse audiences. Global in outlook, he has now been to over 120 countries, reads and writes Spanish and Portuguese, and is comfortable in most of the languages of Western Europe.

About KeyedIn

KeyedIn Solutions helps organizations simplify business processes, improve performance and drive results through innovative SaaS business solutions. These applications were developed in the Cloud for the Cloud, to capitalize on the exclusive benefits only the Cloud can offer.

MarTech Interview Series

The MTS MarTech Interview Series is a fun Q&A style chat which we really enjoy doing with martech leaders. With inspiration from Lifehacker's How I Work interviews, the MarTech Series interviews follow a two-part format: On Marketing Technology, and This Is How I Work. The format was chosen because when we decided to start an interview series with the biggest and brightest minds in martech, we wanted insight into two areas: one, their ideas on marketing tech, and two, the philosophy and methods that make these leaders tick.
Sudipto Ghosh, July 15, 2019, 1:30 pm

Instagram Lets You Restrict People on Your Profile and is Using Artificial Intelligence to Detect Bullying

First Published: July 9, 2019, 1:11 PM IST

To curb online bullying, Facebook-owned Instagram has announced a unique feature where a user can "shadow ban" or "restrict" a bully from commenting on his or her posts. Once you "restrict" someone, comments on your posts from that person will only be visible to that person. You can choose to make a restricted person's comments visible to others by approving their comments.

"We've heard from young people in our community that they're reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life," said Adam Mosseri, Head of Instagram. "We wanted to create a feature that allows people to control their Instagram experience, without notifying someone who may be targeting them."

With this upcoming feature, "restricted" people won't be able to see when you're active on Instagram or when you've read their direct messages. Instagram is also using Artificial Intelligence (AI) to detect bullying and other types of harmful content in comments, photos and videos.

"We have started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it's posted," informed Mosseri. This move gives people a chance to reflect and undo their comment, and prevents the recipient from receiving the harmful comment notification.

"From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect," said Mosseri. "We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves," he added.

Anurag Kashyap Announces New Company, New Film in a Cryptic Tweet

Filmmaker Anurag Kashyap on Thursday announced that he is starting a new company and working on a new film.

"New company, new film, new beginnings," he tweeted on Thursday, without elaborating any further. Notably, Kashyap was one of the founders of the production house Phantom Films along with Vikramaditya Motwane, Madhu Mantena and Vikas Bahl. The banner was dissolved last year, after seven years of partnership and some memorable films, when Bahl got embroiled in the #MeToo controversy and the involvement of the other partners was put under the scanner.

Founded in 2011, Phantom Films produced some of the most talked-about films of the last decade, like Queen, Masaan, Lootera and Udta Punjab.

New company, New film, New beginnings…— Anurag Kashyap (@anuragkashyap72) June 13, 2019

Known for making dark and edgy films like Gangs of Wasseypur, Black Friday, Raman Raghav, Dev D and Gulaal, Kashyap's last directorial was Manmarziyaan, a romantic film starring Taapsee Pannu, Vicky Kaushal and Abhishek Bachchan.

Recently he filed an FIR against a social media user who threatened to rape his daughter over his anti-establishment views. The filmmaker was trolled for drawing Prime Minister Narendra Modi's attention to the nefarious post and asking for help. Responding to it, Kashyap said there should be a law against such toxic trolling, or someone in power should condemn such incidents "in the harshest words" to send across a strong message.

(With News18 inputs)

Follow @News18Movies for more.

First Published: June 13, 2019, 5:19 PM IST

'Hua Toh Hua' is Mantra of Arrogant Congress, Says PM Modi at Ghazipur Rally

Ghazipur: Prime Minister Narendra Modi on Saturday brought up Congress leader Sam Pitroda's "hua to hua" (it happened, so what?) remark on the anti-Sikh riots of 1984, saying it reflected that party's "arrogance".

Modi accused the Congress government in Rajasthan of trying to suppress a Dalit woman's gangrape keeping the Lok Sabha elections in mind. The crime in Alwar provoked protests across the state after the woman's husband said she was raped on April 26 and the police informed on April 30, but the FIR was filed only on May 7.

"The Congress, which is in power in Rajasthan, has tried to suppress the news of the gangrape of a Dalit woman due to the polls in that state," Modi said at an election meeting in Ghazipur in eastern UP. He alleged the police did not act quickly because of the elections. Rajasthan voted in the Lok Sabha polls in two phases, on April 29 and May 6.

The PM said that the Congress cannot give 'nyay' (justice) to the daughters of the country, indirectly invoking the name of the income support scheme that the opposition party has promised to launch if voted to power.

Earlier in Sonebhadra, PM Modi came down heavily on the opposition alliance in Uttar Pradesh for raising questions about his caste, asserting that he belongs to the caste of all poor countrymen. Addressing an election rally here, Modi continued his attack on the Congress over Pitroda's remark.

National security also figured prominently. Modi accused a previous coalition government of weakening the intelligence agencies and recalled that the Pokhran nuclear tests took place on this day 21 years back.

"They have destroyed Uttar Pradesh and now the SP and the BSP have come together to save themselves from destruction," he said, calling their alliance 'mahamilawati', or adulterated.
"They have started a new thing about my caste," he said, in an apparent reference to Bahujan Samaj Party leader Mayawati's jibe that he is a 'farzi' backward, a fake OBC leader. "I want to tell them that Modi belongs to just one caste – whatever caste the poor belong to, I belong to that caste," he said, listing the schemes that his government has launched for them.

He claimed that the country's intelligence agencies suffered when a third front government, which included the Samajwadi Party, was in power at the Centre.

"Our intelligence agencies were weakened by an earlier 'mahamilawati' government, but the (Atal Bihari) Vajpayee government set that right," he said. "Many people connected with our security and intelligence agencies have written a lot about this. They have written how they had made the intelligence network hollow and the country had to bear its consequences for long," he said.

"What the third front government did is in no way less than a crime. Whenever there is a 'mahamilawati' government in the country, it has threatened national security," he said.

The prime minister hailed scientists for the successful Pokhran nuclear tests conducted on this day in 1998.

"Twenty-one years back this day, India successfully carried out nuclear tests, Operation Shakti. I salute the scientists who brought laurels to the country with their hard work. This historic event in 1998 proves what strong political will can do for national security," he said.

Referring to the Manmohan Singh term, he claimed the country got a weak government run by remote control after the one headed by Vajpayee, and said that government brought a bad name to the country.

"So many scams took place but the Congress and its people had no remorse, as their way of thinking is 'hua to hua'," he said, making a fresh attack on the Congress for its leader Sam Pitroda's remarks. "It shows the character and mentality of the party," he said.
"The 'hua to hua' remark reflects the arrogance of the Congress," he said.

Modi cited the Balakot air strikes against Pakistan as an example of what a bold government can accomplish. "This is the new India. Now India barges into the hideouts of terrorists and kills them," he added.

First Published: May 11, 2019, 6:19 PM IST

EC Rejects Ashok Lavasa's Demand, Will Not Disclose Dissent in Cases of Model Code Violations

First Published: May 21, 2019, 7:01 PM IST

New Delhi: The Election Commission on Tuesday rejected with a majority vote election commissioner Ashok Lavasa's demand that dissent notes be recorded in its orders on model code violations, days after the simmering tension within the poll body over the issue came out in the open. The 'full commission' of the panel, comprising Chief Election Commissioner Sunil Arora and the two other members, Lavasa and Sushil Chandra, deliberated on the contentious issue, after which the Commission said that dissent notes and minority views would remain part of its records but would not be part of its orders.

"In the meeting of the Election Commission held today regarding the issue of MCC (Model Code of Conduct), it was, inter alia, decided that proceedings of the commission's meetings would be drawn, including the views of all the commission members," the Commission said in a statement after the meeting, which lasted for more than two hours. "Thereafter, formal instructions to this effect would be issued in consonance with extant laws/rules, etc," it further said.

Explaining the order, a commission official said the dissent notes and minority views would remain part of the records of the poll panel.

In Tuesday's meeting, Lavasa stuck to his ground, pressing for his demand to include dissenting views in the orders.

When contacted, Lavasa said there will be clarity once the minutes of the meeting are drawn.
"Till the time there is clarity on the reasoning of the views, it is premature to say (anything)," Lavasa said, asserting, "My view is very clear that transparency is important … the minority view should be included and there should be time-bound procedures."

Lavasa had dissented on a series of clean chits given by the Commission to Prime Minister Narendra Modi and BJP president Amit Shah over their speeches during the election campaign. As his demand to record his dissent notes in the EC's orders was not met, Lavasa recused himself from cases relating to violations of the model code of conduct.

In a strongly-worded letter to Arora on May 4, Lavasa is learnt to have said that he is being forced to stay away from the meetings of the full commission since minority decisions are not being recorded.

Since copies of the orders are sent to the complainant and respondents, they become public even if the EC does not share them with the media.

The EC had maintained that the dissent notes cannot be made part of the order, as the poll code violation cases are not quasi-judicial in nature and the orders are not signed by the chief election commissioner (CEC) and fellow commissioners. "They are like executive orders. They are summary decisions where the decision is taken by the EC without hearing out counsels of the two parties. The orders are brief and are not signed by the three commissioners," explained an official. Such orders are usually signed by the concerned principal secretary or secretary of the EC, the official said.

Lavasa had dissented in as many as 11 EC decisions involving complaints against Modi and Shah for alleged MCC violations in which they were given a clean chit. An official said the EC is likely to come out with a circular clearly outlining the procedure relating to complaints of poll code violation.

"Status quo will be maintained.
Dissent will not be made public but would form part of EC records," explained an official.

As per the law governing the functioning of the EC, efforts should be made to reach unanimity, but in cases of dissent a majority (2:1) view prevails.

In his May 4 letter, Lavasa is learnt to have also said that his participation in EC meetings is "meaningless" as his dissent remained unrecorded. He had said that his notes on the need for transparency have not been responded to, because of which he has decided to stay away from meetings on model code related complaints. Sources said Lavasa had written to the CEC about the issue on at least three occasions.

Arjun Kapoor Trolls Katrina Kaif in Jest; Anushka Sharma, Virat Kohli Spend Day Out in London

In a nail-biting match, the Kiwis knocked the Men in Blue out of the ICC World Cup 2019 as they emerged victorious in the India vs New Zealand semi-final held on Wednesday. While some fans are still mourning Team India's defeat, skipper Virat Kohli has moved past the World Cup and is making the most of his stay in London.

Katrina Kaif on Saturday had set temperatures soaring with a new Instagram post, posing in a blue swimsuit beside a pillar on a Mexican beach. The post has been 'liked' over a million times already, but her co-star Arjun Kapoor decided to troll the actress in jest. Here are the entertainment news highlights of the day.

Moon landings have always seemed like the stuff of movies, and filmmakers both in India and abroad have been inspired to spin their stories around the phenomenon more than once. As the countdown for the launch of Chandrayaan-2, India's most ambitious space mission that aims to place a rover on the moon, goes on, here are some moon landing inspired films that have greatly expanded the realm of science fiction.

Read: Chandrayaan-2: Five Times the Fascination Around Moon Landings Inspired Movies

Two days after losing the World Cup semi-final to New Zealand, Indian skipper Virat Kohli was spotted having a good time in the city along with his wife, actress Anushka Sharma. Pictures of the same are making the rounds on the Internet. The couple can be seen in their casual best, in a relaxed mood, in the pictures.

Read: Anushka Sharma, Virat Kohli's Day Out in London, See Pics

The week saw a number of trailers that created quite a stir on social media.
Be it Akshay Kumar preparing for Mission Mangal with Vidya Balan, Sonakshi Sinha, Taapsee Pannu, Nithya Menon, Kirti Kulhari and Sharman Joshi, or Angelina Jolie reprising the evil character of the horned fairy Maleficent, the trailers and teasers made quite a buzz on the Internet.

Katrina Kaif's blue swimsuit photo on Instagram elicited a hilarious response from actor Arjun Kapoor. As the actress posed beside a pillar, the actor commented, "Watch where u goin girl !!! Hope u didn't walk into the pillar while posing."

Read: Watch Where You Going: Arjun Kapoor Trolls Katrina Kaif as She Poses Beside Pillar

Priyanka Chopra and Nick Jonas never miss out on opportunities to make each other and their family members feel special. Recently, Nick's mother and Priyanka's mother-in-law, Denise Jonas, turned a year older and the actress was prepared to wish her with a special birthday post.

Read: Priyanka Chopra Has an Adorable Wish for Nick Jonas' Mother on Her Birthday

Follow @News18Movies for more.

First Published: July 14, 2019, 7:49 PM IST

Over 500 Militants Killed in Last 5 Years in Northeast, Substantial Improvement in Security Situation: Centre

Guwahati: A total of 510 extremists, including 223 in Assam, were eliminated by the security forces in the northeast in the past five years – from 2014 till May 31, 2019 – the Centre informed the Lok Sabha on Tuesday. As many as 382 civilians and 113 security forces personnel also lost their lives during this period.

"The security situation in the northeastern states has improved substantially during the last five years. In 2018, the number of insurgency-related incidents decreased by 66% (2013 – 732; 2018 – 252), civilian deaths by 79% (2013 – 107; 2018 – 23) and security forces casualties by 23% (2013 – 18; 2018 – 14)," said Minister of State for Home Affairs G Kishan Reddy in reply to a question from BJP MP Vishnu Datt Sharma.

The minister said 181 militants were killed in 2014, 149 in 2015, 87 in 2016, 57 in 2017, 34 in 2018 and two in counter-insurgency operations this year till May 31.

As far as state-wise data is concerned, 42 militants were killed in Arunachal Pradesh in the last five years, while nine security personnel and 18 civilians lost their lives in 237 incidents in the state.

In 938 insurgency-related incidents in Assam since 2014, 12 security personnel and 219 civilians lost their lives (in 2014 alone, 168 civilian casualties were recorded in 246 incidents).

In Manipur, 107 extremists were killed in 1,084 insurgency-related incidents in the last five years. A total of 77 civilians and 58 jawans lost their lives during this period.

As many as 84 militants, 51 civilians and 14 security personnel were killed in Meghalaya, while 54 militants were killed in Nagaland since 2014, along with 15 security personnel and 16 civilians.

In Tripura, one civilian and two jawans were killed in nine incidents during 2014-15.
Only eight insurgency-related incidents and zero civilian casualties were recorded in Mizoram in the last five years, in which three security personnel lost their lives.

Reddy said, "The Central government is following a multi-pronged strategy to deal with the security situation in the northeast, which comprises security-related measures and development and rehabilitation initiatives."

"The Centre is supplementing the efforts of the state governments through various measures such as deployment of Central Armed Police Forces, financial assistance for strengthening the state police forces and intelligence agencies, raising of India Reserve Battalions, rehabilitation of surrendered militants, reimbursement of security-related expenditure to state governments, banning insurgent groups as 'unlawful associations' and 'terrorist organisations' under the Unlawful Activities (Prevention) Act, 1967, declaring specific areas/states as disturbed areas for the purpose of the Armed Forces Special Powers Act (1958), and issuing notifications for a Unified Command Structure," he added.

First Published: July 9, 2019, 7:18 PM IST | Edited by: Sohini Goswami