Higher Education

Most students borrow for college, but are they financially literate?

Catherine Montalto, The Ohio State University and Anne McDaniel, The Ohio State University

August is here, and many families are preparing their children for the next academic challenge – a college education.

By and large, a college degree is viewed as an important credential for gainful employment and professional success. At the same time, college is costly, and college financing strategies are complex.

Students and their families use multiple sources to finance college expenses. Most students borrow: three out of five college students depend on student loans to fund their education.

But, do students know the ABCs of financial literacy?

College finance options

The college financing process begins with estimating the full cost of college attendance. This includes tuition, housing and living expenses, such as food, books, cellphone plans and transportation.

The next step is to identify all resources available to pay college expenses, including the expected family contribution, scholarships and grants, college savings and wages from employment – if students plan to work.

Once college costs and available resources are carefully estimated, any shortfall in resources informs the need for borrowing. Scholarships and grants are awarded without strings attached. However, student loans come with an obligation to repay the borrowed amount once the recipient is no longer enrolled full-time.

Guidelines for responsible student loan use recommend minimizing the loan amount in order to have less debt to be repaid.
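To make the arithmetic concrete, here is a minimal sketch of the cost-minus-resources calculation described above. All dollar amounts are hypothetical examples, not figures from the study.

```python
# A minimal sketch of the borrowing-need arithmetic described above.
# Every dollar figure is a hypothetical example, not SCFW data.
costs = {"tuition": 11000, "housing": 7000, "food": 3000,
         "books": 1200, "transportation": 800, "cellphone": 600}
resources = {"family_contribution": 9000, "scholarships_grants": 5000,
             "college_savings": 2000, "work_wages": 3000}

shortfall = sum(costs.values()) - sum(resources.values())
loan_needed = max(0, shortfall)  # borrow only the gap, per the guideline above
print(f"Estimated borrowing need: ${loan_needed:,}")  # $4,600
```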

Decisions made by college students and their families regarding loans have direct and significant consequences during adulthood.

The inability to manage student loan repayment along with other financial obligations (e.g., housing, food, utilities, transportation) has been shown to impact career choice, home ownership, marriage, additional education, financial health and overall quality of life.

So, how do students decide the amount to borrow? What rules of thumb or strategies are used? How is use of these strategies related to financial knowledge?

How students make borrowing decisions

We lead the Study on Collegiate Financial Wellness (SCFW), which surveys a random sample of undergraduate students in order to understand their financial behaviors, decisions and wellness. Data from our study provide insights into these questions.

The 2014 SCFW, which collected information from nearly 19,000 college students at 51 public and private four-year and two-year institutions, found that the majority of college students with student loans use one or more strategies to minimize the amount borrowed.

How are students making borrowing decisions? Application image via www.shutterstock.com

For example, data from our study showed over half of student loan users tried to borrow as little as possible (52 percent).

Additionally, 38 percent considered the total amount of debt that they expected to graduate with. Thirty-three percent considered the amount that they had borrowed in the past when deciding how much to borrow for the school year.

But about 28 percent, almost three out of 10 students, reported borrowing the maximum amount available in their package. And about 17 percent of student loan users borrowed the maximum available without also employing a strategy to minimize overall borrowing.

Low financial knowledge

The next question is, how well are students prepared to make these important decisions?

The SCFW included two financial knowledge questions to test whether respondents could understand the concepts of interest and inflation and had basic financial numeracy. These questions assess basic concepts of financial literacy – the knowledge and skill needed to manage financial resources effectively.

Nearly 80 percent of the college student respondents answered the interest rate question correctly. But only 59 percent answered the inflation question correctly. Just over half of the college students (53 percent) answered both questions correctly.

Students who answered the interest rate question incorrectly don’t understand that interest is earned not only on money deposited in a savings account, but also on previously earned interest – a feature known as compounding. Students who answered the inflation question incorrectly don’t understand that rising inflation reduces the buying power of money. Interest and inflation both influence how much our hard-earned money can buy.
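For readers who want the mechanics, here is a small illustration of the two concepts. The interest and inflation rates are illustrative choices, not the figures used in the survey questions.

```python
# Compounding: interest is earned on previously earned interest.
# Inflation: rising prices erode what the final balance can actually buy.
balance = 100.0
rate, inflation = 0.02, 0.03  # 2% annual interest, 3% inflation (illustrative)

for year in range(5):
    balance *= 1 + rate  # each year's interest is computed on the grown balance

buying_power = balance / (1 + inflation) ** 5
print(f"Nominal balance after 5 years: ${balance:.2f}")  # $110.41
print(f"Real buying power: ${buying_power:.2f}")         # $95.24, below the $100 start
```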

These results are similar to previous research conducted in 2007-08 with young adults aged 23-28.

In that study, the percentage of young adults answering correctly was 79 percent for the interest rate question, 54 percent for the inflation question, and 46 percent for both questions.

Knowledge influences borrowing

Some colleges provide either workshops or longer-term courses on financial education, but the percentage of college students who receive financial education remains low.

When students know more, they save more. 401(K) 2012, CC BY-SA

Only one-quarter of the SCFW respondents completed a financial education course in college. Those who did were significantly more likely to answer both financial knowledge questions correctly (58 percent vs. 51 percent). This difference is too large to attribute to chance alone, and suggests that financial education increases financial knowledge.
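As a rough sanity check on the “not chance alone” claim, the comparison can be framed as a two-proportion z-test. The group sizes below are hypothetical, since the article reports only the percentages.

```python
# A sketch of a two-proportion z-test on the 58% vs. 51% comparison above.
# Group sizes are hypothetical (~25% of roughly 18,800 respondents took a course).
from statsmodels.stats.proportion import proportions_ztest

n_course, n_no_course = 4700, 14100
both_correct = [int(0.58 * n_course), int(0.51 * n_no_course)]

z, p = proportions_ztest(both_correct, [n_course, n_no_course])
print(f"z = {z:.2f}, p = {p:.2g}")  # at these sample sizes, p is far below 0.05
```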

Using data from the SCFW on 7,180 students at four-year colleges, we wanted to see if financial knowledge and financial education were associated with strategies used to make borrowing decisions.

We controlled for many factors known to affect student loan borrowing, including student age, sex, race/ethnicity and socioeconomic status. We found that with higher financial knowledge, students were more likely to borrow what they believed they needed.

Students with greater financial knowledge also budgeted and borrowed as little as possible.

Knowledge is power

While a college degree certainly pays off in the long run, the payoff can take longer if students have loans to repay.

This fall as students enter or return to college, it is important that they make thoughtful decisions about financing their education. Families making decisions about paying for college may also want to have discussions about how much students understand about finances or look for opportunities to take financial education workshops.

As the saying goes, knowledge is power, even when it comes to finances.

The Conversation

Catherine Montalto, Associate Professor of Consumer Sciences, The Ohio State University and Anne McDaniel, Senior Associate Director, The Ohio State University

This article was originally published on The Conversation. Read the original article.

We Need More Diversity in Information Technology in Higher Education

The demand for information technology workers is growing, and the available supply isn’t keeping pace. With the retirement of the baby boomer generation in full swing, worker shortages are only going to be more apparent in the years to come. And, if the information technology field can’t attract a more diverse population, the field is going to suffer.

Tech companies are generally not known for diversity. The IT workforce is predominantly white or Asian males. Even though many companies announce diversity initiatives on a regular basis, they can only hire from the worker pool that is available. And that pool is shaped by the choices students make in higher education.

Minorities in Computer Science

In 2013, a study showed that of all the computer science bachelor’s degrees awarded by 179 prestigious universities, 4.5 percent went to black students and 6.5 percent went to Hispanic students. However, the US Census Bureau showed that the population was 12.6 percent black and 16.3 percent Hispanic as of 2010.

This suggests that minority students aren’t being attracted into the computer science field. Couple this with the low number of black and Hispanic students participating in the AP Computer Science exam in high school, and that fact becomes more apparent.

Since black and Hispanic students are underrepresented in technology-oriented education, they will be underrepresented in the information technology workforce as well. And this issue is compounded by negative impressions many minorities have regarding the culture at many tech companies.

Minority Hiring at Tech Companies

The hiring numbers from tech giants like Google, Apple, and Facebook do not paint a welcoming picture to minority students interested in technical fields. Often, this negative impression leads minorities to seek work in lower-paying positions outside of traditional tech companies. In fact, many default to office or administrative positions even though their degrees indicate greater potential.

So how can additional diversity in information technology higher education help overcome low levels of hiring? By creating a larger pool of qualified minority candidates. If more are available, then companies may be moved to hire more minority applicants.

Improving Diversity

Part of what encourages students to pursue specific degrees is a sense of belonging. Often, this involves finding a role model with traits similar to their own as a source of inspiration to move forward. In technology, the population working in the field lacks diversity, making it more challenging for students to find suitable role models.

In some cases, the lack of diversity is apparent even sooner. For example, university, college, and high school teachers are seen as representatives of the field for many students. If there is no diversity in hiring for these teaching positions, minority students may be less inclined to picture themselves pursuing these fields even if they otherwise have an interest in the work.

If educational institutions hire with an unconscious bias, similar to what may exist in the technology community as a whole, then they are likely to choose instructors based on preconceived notions instead of purely on capability. By breaking the cycle, and introducing computer science and technology students to a diverse group of educators throughout their schooling, the amount of diversity in the field as a whole can increase. And, once diversity is seen as the norm, that will support a cycle of diversity and inclusion instead of what we see today.

The importance of play: what universities can learn from preschools

Nicola Whitton, Manchester Metropolitan University

Almost as soon as they begin school, children start getting tested. With the introduction of tests for four-year-olds and the explicit link between test results and school performance, education policies of successive governments have led to an increased emphasis on results at all levels of schooling.

This focus has led to a stigmatisation of failure, even though failure is fundamental to the learning process from preschool all the way to university.

This ill-prepares learners for real life, which does not provide set answers to problems with neat scores to gauge progress. The real world is messy and diverse, and young people need to be creative, resourceful and resilient to succeed in it. One of the best ways to achieve this is through play.

The best kind of learning is “intrinsically motivated”, where students want to learn because it is interesting, purposeful and personally relevant, not because it is assessed. Learning takes place through action, failure, reflection, and practice. But while making mistakes is an inevitable part of this process, our school system fails to recognise this.

Exam grades are often seen as more important than fostering a love of learning – and as a result schools are overlooking the value of learning that does not fit into a specified curriculum.

When students reach university, most have learned that grades (and their impact on job opportunities) are of prime importance. For many, the magic of learning out of interest and passion has been eclipsed. The introduction of tuition fees has only increased the expectation that the role of university is to provide qualifications rather than focus on the intrinsic value of education.

This shift in expectation is hardly surprising given that students have to consider their personal investments and the returns they are likely to receive. This makes perfect sense for an individual student, but does not take into account what is best for society, which needs people to be creative and take risks, not simply focus on scoring highly in a test.

The need to fail

While many students fail university modules and drop out of courses, this is often seen as a last resort and universities are becoming increasingly averse to failing their students. A focus on one-shot assessments does not give students opportunities to fail regularly on a less catastrophic level.

The ability to manage failure, both emotionally and practically, increases the ability to manage risk. It is only by taking risks that we can explore new possibilities and ways of thinking. We are in danger of creating a generation of risk-averse students. The possibility of failure can also actually increase a person’s intrinsic motivation: if success is certain, there is little challenge and so little motivation.

One way to develop a generation who can take risks is through playful learning. Play supports socialisation and decreases stress, develops imagination and creativity, and enables learners to have new experiences and to learn from their mistakes.

While it is integral to early years education, a focus on assessment has all but driven play out of schools. The relative flexibility of higher education curricula and teaching approaches provides opportunities to give learners chances to play, experiment, experience, and fail – and, most importantly, learn from those failures.

Make it worth their while. JHershPhoto/www.shutterstock.com

Playtime at university

Several UK universities are already embracing elements of playful learning. For example, the University of Portsmouth uses “pervasive learning” activities, where courses are taught through playful, detailed simulations in which students work together to solve problems and make mistakes away from the real consequences of assessment.

The Great History Conundrum at the University of Leicester, which runs every year for first-year students, uses an online puzzle-solving card game to teach critical historical literacy. Students play as long as they like to collect enough points to pass the course: if they fail on one puzzle they can move on to the next.

Students at Manchester Metropolitan University play the Staying the Course game during induction to highlight the range of university support available. The University of Brighton has also used alternate reality games during induction, which allow students to work together to solve online and physical puzzles, and large-scale multi-player quizzes to engage new students and orientate them to university life in novel ways.

These kinds of approaches do not work in every context, and will inevitably meet resistance from some students and academics. We have to make the case that far from trivialising education, playful learning makes it richer, more purposeful, and more useful for life after education.

Playful learning is not an easy option. It is more academically challenging, making students less reliant on rote learning and established ideas. To embrace playful learning, we need to create more opportunities for students to fail safely and focus on the development of intrinsic motivation, passion and curiosity. Crucially, we must radically rethink how, and why, we assess our students.

The Conversation

Nicola Whitton, Professor in Education, Manchester Metropolitan University

This article was originally published on The Conversation. Read the original article.

Teaching the next generation of cybersecurity professionals

Nasir Memon, New York University

Each morning seems to bring new reports of hacks, privacy breaches, threats to national defense or our critical infrastructure and even shutdowns of hospitals. As the attacks become more sophisticated and more frequently perpetrated by nation-states and criminal syndicates, the shortage of defenders only grows more serious: By 2020, the cybersecurity industry will need 1.5 million more workers than will be qualified for jobs.

In 2003, I founded Cyber Security Awareness Week (CSAW) with a group of students, with the simple goal of attracting more engineering students to our cybersecurity lab. We designed competitions allowing students to participate in real-world situations that tested both their knowledge and their ability to improvise and design new solutions for security problems. In the past decade-plus, our effort has enjoyed growing interest from educators, students, companies and governments, and shows a way to close the coming cybersecurity workforce shortage.

Today, with as many as 20,000 students from around the globe participating, CSAW is the largest student-run cybersecurity event in the world. Recruiters from the U.S. Department of Homeland Security and many large corporations observe and judge each competition. (Registration for this year’s competition is still open for a little while.)

But the pipeline for cybersecurity talent cannot begin in universities alone. High school students and teachers also participate in CSAW events that teach young people the computer science and mathematics skills that will allow them to succeed at the university level.

Teaching students to be adversarial

Thousands of students join together to learn about cybersecurity. CSAW, CC BY-ND

The main draw of CSAW is our Capture the Flag event, a contest in which the team members must pool their skills to learn new hacking methods in a series of real-world scenarios. Named after the outdoor game where two teams play to find and steal the enemy’s hidden flag, it includes multiple games that cover a broad range of information security skills, such as cryptography (code-making and breaking), steganography (hiding messages in innocent-looking images or videos) and mobile security.

Teams start by being assigned systems that have security flaws, and are given a certain amount of time to identify and fix them. Then each team is set against an opponent, and must protect its own system while attacking the other team’s. The hidden “flags” are data files stored on the opposing system. In the real world, these would contain critical information – such as credit card numbers or codes for controlling weapons. In the game, they contain information that proves a team “captured” that “flag,” with which the team is awarded a certain number of points, based on how difficult that particular challenge was.
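To give a flavor of the steganography challenges named above, here is a toy sketch of least-significant-bit (LSB) hiding. Real CSAW puzzles are far more elaborate, and the “image” here is just a list of made-up pixel values.

```python
# Toy LSB steganography: hide a message in the lowest bit of fake 8-bit pixels.
message = b"flag{hello}"
bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]  # MSB first

pixels = list(range(32, 32 + len(bits)))              # stand-in "image" pixels
stego = [(p & ~1) | b for p, b in zip(pixels, bits)]  # overwrite each pixel's LSB

# Recovery: read the LSBs back and reassemble the bytes.
recovered_bits = [p & 1 for p in stego]
recovered = bytes(
    sum(bit << (7 - i) for i, bit in enumerate(recovered_bits[j:j + 8]))
    for j in range(0, len(recovered_bits), 8)
)
assert recovered == message  # the hidden "flag" survives the round trip
```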

Capture the Flag competitions are held throughout the country, and their popularity helps make our event the most popular of the week’s six competitions. It is also the most grueling: Teams must work for 36 hours straight, testing each participant’s ability to stay focused enough to create new solutions to emerging problems.

This type of challenge-based learning is vital in a field in which new threats emerge regularly. It also instills in students an adversarial mindset, which is an essential quality for successful security professionals. Learning the different ways to break a system firsthand is a vital first step to learning how to secure it.

Adapting on the fly

In one CSAW competition, the Embedded Security Challenge, students break into teams that must be able to work quickly at both attacking and defending each other from various threats. This is an attack/defense game like Capture the Flag, but focuses on vulnerabilities in hardware, rather than software. Last year, competitors were tasked with altering the digital results of a mock election – exposing potentially real threats to everyday elections.

This ability to quickly adapt as new threats are perceived is a top priority for security personnel. That’s a key element of all CSAW competitions – the idea that successful cybersecurity is not limited to mastering what’s known. Rather, students and professionals alike must constantly push their abilities to intercept future threats in an ever-evolving field. The cybersecurity industry – and all operations that rely on it, from small businesses to major military installations – depends on its practitioners’ ability to innovate. Every year, we change the types of challenges to reflect new threats, such as the recent rise of ransomware.

Cybersecurity efforts must extend well beyond national borders; this year CSAW will dramatically increase its international activities. A collaboration with NYU Abu Dhabi and the Indian Institute of Technology Kanpur will allow teams in the Middle East, India, North Africa and the United States to compete simultaneously. The competitors in these games in an educational setting, in the U.S. and around the world, will – not long from now – be the protectors of our most sensitive personal and national data. We need them to be prepared.

The Conversation

Nasir Memon, Professor of Computer Science and Engineering, New York University

This article was originally published on The Conversation. Read the original article.

Where are new college grads going to find jobs?

Michael Betz, The Ohio State University

College graduates of the new millennium are different from previous generations. Not just because they prefer Snapchat to email and have mountains of school loans, but also because of their choices of where to live.

In the past, several factors such as the proportion of a city’s workers who are college educated, job prospects, income levels, and city amenities have influenced college graduates’ decisions on where to live.

So, are college graduates looking for the same things as they did in earlier decades? Or has a changing world affected what attracts educated workers?

These choices matter to city officials, who work hard to attract the next generation of skilled workers through the lure of exciting new entertainment districts or by becoming a hub of a trendy new industry.

As a regional economics researcher studying the migration of highly skilled workers, I have found that there have been changes in how college graduates choose where they go to work, which has implications for the health of the national economy.

Grads in the 1990s

Previous research found that in the 1990s – after controlling for lots of other city characteristics like population, income and amenities – the proportion of the population with a college degree increased more in cities that already had lots of college grads.

Boulder, Colorado. Let Ideas Compete, CC BY-NC-ND

For example, small college towns like Boulder and Ann Arbor had some of the country’s highest proportions of populations holding a bachelor’s degree – around 42 percent of the population at the start of the 1990s.

These towns ranked first and eleventh in the proportion of college graduates. Over the next decade, they added 10.2 and 6.2 percentage points to their respective totals.

There are many reasons why highly educated workers might be attracted to towns with a high population of college graduates.

Art fair in Ann Arbor, Michigan. Michigan Municipal League, CC BY-ND

One, there is evidence that productivity from better educated workers can “spill over” to other workers, enabling them to earn higher wages. Two, better educated places tend to offer more cultural amenities and entertainment options, which college graduates value highly.

If this trend goes on long enough, it could lead to segregation according to education level across cities, as cities with better educated workers will continue to attract more educated workers. In fact, one prior study suggests this is exactly what happened in the three decades between 1970 and 2000.

We were interested in seeing whether these trends still held, or if different factors were now luring graduates. So what is this generation of graduates looking for?

Large cities attract grads

A lot has changed for college graduates of the new millennium.

After all, in the last decade there have been two recessions (including the Great Recession in 2008), a continued decline in overall interstate migration, and significant industry restructuring from the loss of so many manufacturing jobs. Other studies have highlighted the trend of college-age adults living with their parents longer, providing some evidence that the struggling economy was swaying behaviors.

We wanted to see if the same economic forces had also changed where college graduates were moving.

We used data from the U.S. Census on the share of population who are college graduates in 358 Metropolitan Statistical Areas in the United States for two separate decades: 1990-2000 and 2000-2010. We controlled for many other city characteristics like size, median income and natural amenities.

We found it was not necessarily the better educated cities that attracted graduates, as in the 1990s; rather, it was the large cities (holding city education levels constant) that attracted graduates post-2000.
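For the statistically inclined, the kind of city-level model described above can be sketched roughly as follows. The variable names and synthetic data are hypothetical stand-ins, not the authors’ actual Census measures.

```python
# A rough sketch of a city-level growth regression: change in the college-grad
# share regressed on city size, holding education level and other controls fixed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 358  # metropolitan statistical areas, as in the study
df = pd.DataFrame({
    "grad_share_change": rng.normal(3, 2, n),  # change in % college grads, 2000-2010
    "log_population": rng.normal(13, 1, n),
    "grad_share_2000": rng.normal(25, 8, n),
    "median_income": rng.normal(45, 10, n),
    "amenity_index": rng.normal(0, 1, n),
})

model = smf.ols(
    "grad_share_change ~ log_population + grad_share_2000"
    " + median_income + amenity_index",
    data=df,
).fit()
print(model.params)  # in the authors' data, city size stands out post-2000
```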

Bigger cities are attracting more graduates. Aurelien Guichard, CC BY-SA

Bigger cities usually have more diversified economies and therefore pose less risk of unemployment. It could be that graduates preferred places where they would increase their odds of landing any job during a time when the overall economy was struggling.

In previous decades, graduates might have risked failure by chasing down that dream job in a less diversified, riskier city. Take Des Moines, Iowa, for example. It is a city of about 200,000 people that is highly specialized in the insurance industry. The percentage of the population who were college grads increased six percent in the 1990s, but that growth slowed to less than four percent post-2000.

We believe that, because the economy was so strong in the 1990s, graduates were willing to take those risks. So they went to cities with industries that employed highly educated people and were growing fast, but which were also more risky. If things didn’t work out, they believed they could always find a decent job somewhere else.

In the first decade post-2000, much of that perceived job security had eroded.

What can policymakers do?

Our research is important because it further refines models of regional college graduate growth and updates our understanding of how the mix of factors influencing city college graduate growth has changed in the new millennium.

The link between education and economic growth is well established and intuitive. Better educated workers are more productive in their jobs, have higher civic engagement and are more likely to start new businesses that create even more jobs.

Because of this, our results have important policy implications for decision-makers aiming to lure talented workers to their cities. While city officials can do little to make their cities quickly grow in population, they can enact policies that create more stable labor markets, such as investing in education and supporting local entrepreneurs.

The Conversation

Michael Betz, Assistant Professor of Human Development and Family Science, The Ohio State University

This article was originally published on The Conversation. Read the original article.

Is it time to eliminate tenure for professors?

Samantha Bernstein, University of Southern California and Adrianna Kezar, University of Southern California

The State College of Florida recently scrapped tenure for incoming faculty. New professors at this public university will be hired on the basis of annual contracts that the school can decline to renew at any time.

The decision has been highly controversial. But this is not the first time tenure has come under attack. In 2015, Wisconsin Governor Scott Walker called for a reevaluation of state laws on tenure and shared governance. As of March 2016, a new policy at the University of Wisconsin has made faculty vulnerable to layoffs.

The tenure system provides lifetime guarantees of employment for faculty members. The purpose is to protect academic freedom – a fundamental value in higher education that allows scholars to explore controversial topics in their research and teaching without fear of being fired.

It also ensures that faculty can voice their opinions to university administration and that academic values are protected, particularly from the increasingly corporate ideals invading higher education institutions.

Our research on the changing profile of university faculty shows that while the university enterprise has transformed dramatically in the last hundred years, the tenure employment model remains largely unchanged. So, has the tenure model become outdated? And if so, is it time to eliminate it altogether?

Growth of adjunct faculty

The demographics of higher education faculty have changed a lot in recent years. To start with, there are very few tenured faculty members left within higher education.

Tenure-track refers to that class of professors who are hired specifically to pursue tenure, based largely on their potential for producing research. Only 30 percent of faculty are now on the tenure-track, while 70 percent of faculty are “contingent”. Contingent faculty are often referred to as “adjuncts” or “non-tenure track faculty.” They are usually hired with the understanding that tenure is not in their future at that particular university, and they teach either part-time or full-time on a semester-to-semester or yearly basis.

There are fewer tenured faculty in the higher education system. St. Ambrose University, CC BY-NC

Most contingent faculty have short-term contracts which may or may not be renewed at the end of the contract term. As of 2010, 52 percent of contingent faculty had semester-to-semester part-time appointments and 18 percent had full-time yearly appointments.

Researchers suggest that the increase in contingent appointments is a result of the tenure model’s failure to adapt to the significant and rapid changes that have occurred in colleges and universities over the last 50 years.

The most significant of these changes is the rise of teaching-focused institutions, the largest growth being in community colleges, technical colleges and urban institutions that have a primary mission to educate students, with little or no research mission. Between 1952 and 1972 the number of community colleges in the United States nearly doubled, from 594 to 1,141, to accommodate a large increase in student enrollments, leaving four-year institutions to focus on research and development.

Campuses changed, not tenure system

Most commentators have described the growth of contingent faculty as a response to financial pressures in the 1990s.

But our research shows that this growth actually began in the 1970s, when market fluctuations caused unexpected growth in college enrollment. Between 1945 and 1975, college enrollment in the United States increased by 500 percent. However, rising costs and a recession in the late 1970s forced administrators to seek out part-time faculty willing to work for lower wages in order to accommodate these students. The practice increased dramatically thereafter.

In addition to enrollment changes, government funding for higher education decreased in the late 1980s and ‘90s. The demand for new courses and programs was uncertain, and so campuses needed more flexibility in faculty hiring.

Further, over the last 20 years new technologies have created new learning environments and opportunities to teach online.

Tenure-track faculty incentivized to conduct research were typically not interested in investing time to learn about new teaching technologies. Consequently, a strong demand for online teaching pushed institutions into hiring contingent faculty to fill these roles.

As a result, what we have today is a disparity between the existing incentive structures that reward research-oriented, tenure-track faculty and the increased demand for good teaching.

Why the contingent faculty model hurts

Critics of tenure argue that the tenure model, with its research-based incentives, does little to improve student outcomes. But the same can be said of the new teaching model that relies so heavily on contingent faculty – it is not necessarily designed to support student learning.

Research on contingent faculty employment models illustrates that they are poorly designed and lack many of the support systems needed to foster positive faculty performance.

For example, unlike tenure-track faculty, contingent faculty have little or no involvement in curriculum planning or university governance, little or no access to professional development, mentoring, orientations, evaluation, campus resources or administrative support; and they are often unaware of institutional goals and outcomes.

Furthermore, students have limited access to or interaction with these faculty members, which research suggests is one of the most significant factors impacting student outcomes such as learning, retention and graduation.

Studies have shown that student-faculty interaction provides students with access to resources, mentoring and encouragement, and allows them to better engage with subject material.

Studies show lower graduation rates as a result of the faculty workforce model. Sakeeb Sabakka, CC BY

Recent research on contingent faculty has also identified some consistent and disturbing trends related to student outcomes that illustrate problems related to new faculty workforce models. These include poor performance and lower graduation rates for students who take more courses with contingent faculty, and lower transfer rates from two-year to four-year institutions.

Using transcripts, faculty employment and institutional data from California’s 107 community colleges, researchers Audrey Jaeger and Kevin Eagan found that for every 10 percent increase in students’ exposure to part-time faculty instruction, they became 2 percent less likely to transfer from two-year to four-year institutions, and 1 percent less likely to graduate.
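Read literally, those estimates imply simple linear arithmetic. The sketch below applies the article’s per-10-percent coefficients to a hypothetical change in exposure.

```python
# Linear reading of the Jaeger-Eagan estimates: per 10% more exposure to
# part-time faculty, transfer likelihood falls 2 points and graduation 1 point.
def outcome_change(exposure_increase_pct, effect_per_10pct):
    return (exposure_increase_pct / 10) * effect_per_10pct

# Hypothetical example: a student whose exposure rises by 30 percent.
print(outcome_change(30, -2))  # -6.0: change in transfer likelihood
print(outcome_change(30, -1))  # -3.0: change in graduation likelihood
```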

Additionally, studies of contingent faculty members’ instructional practices suggest that they tend to use fewer active learning, student-centered teaching approaches. They are also less engaged with new and culturally sensitive teaching approaches (strategies encouraging acknowledgment of student differences in a way that promotes equity and respect).

Today, while the pool of Ph.D. students is growing, the number of tenure-track positions available to graduates is shrinking. As a result, a disconnect has evolved between the types and number of Ph.D.s on the job market in search of tenure, and the needs of, and jobs available within, colleges and universities.

Some estimates show that recent graduates have less than a 50 percent chance of obtaining a tenure-track position. Furthermore, it is graduates from the top-ranked quarter of graduate schools who make up more than three quarters of tenure-track faculty in the United States and Canada, specifically in the fields of computer science, business and history.

A new tenure system?

We appear to be at a crossroads. The higher education enterprise has changed, but the traditional tenure model has stayed the same. The truth is that universities need faculty who are dedicated to teaching, but the most persuasive argument in support of tenure – its role in protecting academic freedom – has come to be too narrowly associated with research.

Academic freedom was always meant to extend to the classroom – to allow faculty to teach freely, in line with the search for truth, no matter how controversial the subject matter. Eliminating tenure completely will do little to protect academic values or improve student performance.

Instead, the most promising proposal that has emerged many times over the last 30 years is to rethink the traditional tenure system in a way that would incentivize excellent teaching, and create teaching-intensive tenure-track positions.

Under an incentive system, when considering whether to grant tenure, committees can take into account excellence in teaching, by way of student evaluations, peer review, or teaching awards. For faculty on a teaching-intensive track, tenure decisions would be made based primarily on their teaching, with little or no weight given to research.

Though not every contingent faculty member would be eligible for such positions, these alternative models can change the incentive structures inherent in the academic profession. They may be able to remove the negative stigmas surrounding teaching in the academy and may eliminate the class-based distinctions between research and teaching faculty that have resulted from the traditional tenure model.

The Conversation

Samantha Bernstein, PhD Student, University of Southern California and Adrianna Kezar, Professor of Higher Education, University of Southern California

This article was originally published on The Conversation. Read the original article.

Eliminating inequalities needs affirmative action

Richard J. Reddick, University of Texas at Austin; Stacy Hawkins, Rutgers University, and Stella M Flores, New York University

The Supreme Court has upheld the affirmative action admissions policy of the University of Texas. Abigail Fisher, a white woman, applied to the University of Texas at Austin (UT Austin) in 2008. After she was denied admission, she sued the university on the grounds that its race-conscious admissions policy violated the equal protection clause of the Fourteenth Amendment.

On Thursday, June 23, the Supreme Court ruled that the race-conscious admissions program was constitutional – a decision that the three scholars on our panel welcome. They tell us why existing educational inequalities need considerations of race and ethnicity in admissions.

How else do you eliminate inequality?

Richard J. Reddick is an associate professor in educational administration at University of Texas at Austin.

UT Austin’s history on legal decisions about race in higher education goes back to Sweatt v. Painter (1950), a case that successfully challenged the “separate but equal” doctrine articulated in Plessy v. Ferguson (1896). The landmark case helped pave the way for Brown v. Board of Education (1954), which outlawed racial segregation in education.

The next test, in the Hopwood v. Texas (1996) case, came from the other direction. Cheryl Hopwood was a white applicant who was denied admission. She challenged UT Austin’s use of race in its admissions decisions as unconstitutional. The U.S. Court of Appeals for the Fifth Circuit eliminated the consideration of affirmative action in universities and colleges in Texas. This decision was overruled in 2003.

Fisher, then, was another challenge to the university’s renewed efforts to provide educational opportunity and access to underrepresented students at predominantly white institutions.

UT Austin’s history on legal decisions about race in higher education goes back to Sweatt v. Painter. qmechanic, CC BY-NC-SA

Opponents of affirmative action often argue that seemingly neutral metrics, such as test scores and class rank, should be the method by which to admit students.

These arguments fail to consider the real impact that racial and socioeconomic discrimination has on educational opportunity. School resources and teacher quality differ significantly, and intangibles such as leadership opportunities often depend on subjective criteria such as teacher recommendations.

Furthermore, many students from underrepresented communities confront challenges in navigating school systems. We additionally know that standardized testing can show bias against certain populations.

In other words, these “neutral” measures actually reinforce social inequities.

The most selective institutions of higher education in the nation no longer rely solely on these metrics. They seek out students with a variety of experiences – factors that may not always correspond to test scores and class ranking.

Today’s ruling is a reassurance, as fleeting as it might be, that the massive task of eliminating educational inequality – which correlates to many other forms of inequality – can be supplemented by approaches in college admissions that consider race and ethnicity.

It does not minimize the importance of eradicating racial discrimination in all walks of life: in the words of UT Austin president Greg Fenves, “race continues to matter in American life.”

However, emphasizing the significance of careful, narrowly tailored approaches to enhancing diversity at predominantly white institutions is a victory for the scholars, researchers, administrators and families who have demonstrated how diversity provides significant educational benefits for all students and American society.

What are the implications for other colleges?

Stacy Hawkins is an associate professor at Rutgers University, where she teaches courses in Employment Law and Diversity in the Law.

The Supreme Court’s decision is cause for both celebration and circumspection.

Justice Anthony Kennedy, the court’s moderate swing justice, whose opinion was rightly predicted to be the key to the decision, undoubtedly shocked many by voting for the first time to uphold a race-conscious admissions policy.

However, the decision is more consistent with Justice Kennedy’s prior decisions, notwithstanding the difference in outcome, than might appear at first blush.

On the one hand, Justice Kennedy reaffirmed his commitment to diversity as a compelling educational interest in 21st-century America (a view he expressed in prior cases on diversity in higher education, as well as in primary and secondary schools).

On the other hand, however, Justice Kennedy also reaffirmed his long-standing belief that, notwithstanding this interest, race may play no more a role than is absolutely necessary to achieve the educational benefits of diversity.

In striking this delicate balance, Justice Kennedy sanctioned the University of Texas’ race-conscious admissions policy today, but gave fair warning that the future of this policy is by no means secure.

More important perhaps than the implications of this decision for the University of Texas is what implications, if any, it may have for other colleges and universities.

As Justice Kennedy acknowledged, the University of Texas is unique in its use of race to narrowly supplement a plan that admits the overwhelming majority of students (at least 75 percent) on the sole basis of high school class rank without regard to race, a feature that was critical to Justice Kennedy’s approval of the policy.

Thus, the vast majority of colleges and universities may still be left to wonder about the constitutionality of their own race-conscious admissions policies that operate more widely than Texas’ does.

With a similar case against Harvard University currently winding its way through the federal courts, the answer may not be far off.

Affirmative action bans exist in many states

Stella M. Flores is an associate professor of higher education at the Steinhardt School of Culture, Education, and Human Development at New York University.

Demography, economy and diversity are key issues facing the nation’s colleges and universities and should also be a part of their policy design.

In Fisher v. Texas today, Justice Kennedy’s opinion clearly states two outcomes. The first is that the university’s deliberation that race-neutral programs had not achieved their goals was supported by significant statistical and anecdotal evidence.

Admissions policies at universities play a key role in diversifying key areas. Supreme Court image via www.shutterstock.com

The second is that universities have the obligation to periodically reassess their admissions programming using data to ensure that a plan is narrowly tailored so that race plays no greater role than is necessary to meet its compelling interests. This is in essence an accountability mechanism for universities to follow using data and research.

Admissions policies at universities play an important role in the ability to diversify key fields relevant to the nation’s economy, including law, medicine, STEM, education and public policy, so that they can appropriately reflect and serve the unprecedented demographic expansion facing our country.

The decision ensures that pathways to the nation’s most critical educational and employment fields will stay open.

But there are other considerations and realities that include the following. First, some of the nation’s most racially diverse states will still operate under affirmative action bans due to state legislation and referenda. These include California, Florida, Michigan, Arizona and Oklahoma.

Second, there is still a clear need for additional effective policies and efforts beyond a consideration of race in college admissions to address the disconnect between the demographics of the nation and its public K-12 schools and who is represented at selective colleges and universities.

Retracting the use of race nationally would have been a step toward increasing racial and ethnic inequality in schools and society. But we’re in a time where race really matters in this country and in how we learn together as a diverse society in our classrooms. This decision reflects this reality.

The Conversation

Richard J. Reddick, Associate Professor in Educational Administration, University of Texas at Austin; Stacy Hawkins, Associate Professor, Rutgers University, and Stella M Flores, Associate Professor of Higher Education, New York University

This article was originally published on The Conversation. Read the original article.

Just graduated? Does it make you feel like a grown up?

Michael Vuolo, The Ohio State University and Jeylan T Mortimer, University of Minnesota

We may think that a simple age cutoff – such as 18 – should make us feel like adults. And why not? After all, crossing an age threshold can bestow certain rights, such as voting, military enlistment, purchase of certain substances as well as adult images or videos.

From our perspective as researchers who study the transition from adolescence to adulthood, these legally defined age markers are hardly a good indicator of when we feel like adults. They can be subject to change and have no universal or even national standard.

For example, the minimum purchase age for alcohol and recreational marijuana is 21. But the purchase of recreational marijuana is not legally permitted in all states. While the tobacco purchase age is typically 18, two states and several cities recently moved it up to 21.

In addition, individuals may not “feel” like adults simply because they have passed an age marker.

So, when do we “feel” like adults?

Path to adulthood

Our idea of “adult” is bound closely to both our objective attainment of certain roles as well as our subjective evaluation of the timing of those roles.

Scholars working in this area have identified five important role transitions marking adulthood: finishing school, leaving home, acquiring stable work, marrying and parenting.

Although each of these adult roles has been considered alone or in pairs, little is known about how people traverse all the roles simultaneously and how achieving these markers of adulthood affects considering one’s self an “adult.”

People may feel “on time” or “off time,” depending on whether they achieve adult roles at the “right time.” In other words, feeling like an adult may be tied to achieving multiple roles marking adult life rather than any single one, and doing so in a timely manner compared to peers.

Pathway to adult life? Elizabeth Donoghue, CC BY-NC-ND

A typical pathway was laid out in the early and mid-20th century: exit school, get a job, move out of the parental home, get married, and have children.

While this might be considered the “normal” pathway even today, these transitions do not occur in such a neat and predictable order for many contemporary young people. Furthermore, the time to complete them has become longer.

It is commonplace today for young people to return to school after beginning work, move back in with parents (or never leave), have children prior to marriage, or work in less secure part-time jobs.

Different transition paths

Given the myriad possible paths through these roles, our research seeks to find frequent patterns or commonalities in the ways roles marking adulthood are traversed from ages 17 to 30 and what they mean for considering oneself an adult.

The study is based on a sample of 1,010 freshmen in St. Paul Public Schools, a Minnesota school district. The survey started in 1988 and continued nearly annually through 2011. Over more than 20 years, this study has examined the consequences of work and other formative experiences in adolescence for the transition to adulthood.

Using a method that could identify distinct patterns in the timing and sequencing of adult roles, we found that the traditional school-to-work transition followed by “family formation” (that is, getting married and having children – around age 25) described above still exists.

However, only about 17 percent of young people follow that path today. Rather, most youth take four other pathways to adulthood.

Two of those paths involve a traditional school-to-work transition in one’s early twenties. But they are different in when they choose to form a family: one group delayed forming a family until their late twenties (20 percent); another did not do so by age 30 (27 percent).

The two remaining paths were distinguished by their low likelihood of attending college and by early marriage and kids. Members of both groups had children by age 22.

But even these two paths defined by early parenting differed from one another: One group of early parents married and acquired full-time work (15 percent). The other, however, had much lower chances of achieving those roles (20 percent).

In other words, there were several objective ways to traverse the transition to adulthood.
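The article does not name the pattern-finding method, so the following is only a hypothetical sketch of one common approach: encode each respondent’s role states across ages 17 to 30 and cluster the resulting sequences.

```python
# Hypothetical sketch: cluster binary role-by-age sequences into pathway groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_people, n_ages, n_roles = 1010, 14, 5  # ages 17-30; five adult roles
# 1 = role attained at that age (school done, left home, stable work, married,
# parent), 0 = not. Random stand-in for the real survey data.
sequences = rng.integers(0, 2, size=(n_people, n_ages * n_roles))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sequences)
print(np.bincount(kmeans.labels_) / n_people)  # share of the sample per pathway
```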

Marriage, parenthood are critical

The question remains, do the members of these groups feel like an adult when they reach their mid-twenties? Have they acquired an adult identity? Do they think they are on or off time in achieving the five markers of adulthood?

Given the social acceptance of the traditional pathway of school-work-marriage-kids, individuals following that path were more inclined to view themselves “entirely” as adults. They considered themselves “on time” with regards to marriage and financial independence, relative to their peers.

Early parents who married and acquired full-time work also felt entirely like adults, although they considered themselves “very early” in traversing those markers.

Truly feeling like an adult is tied to forming one’s own family. Kim Davies, CC BY-NC-ND

By contrast, the early parents who did not get married or acquire stable work felt “very early” on parenthood, but “very late” on other markers like marriage, cohabitation, and financial independence.

The other two groups, who took the traditional school-to-work transition but delayed or forwent marriage and kids, felt “not entirely” like adults. They believed that they were “very late” on parenthood.

While they achieved several traditional markers of adulthood, including finishing school, getting a job, and moving out on their own, they still did not feel like adults without marriage and parenthood.

It would appear that truly feeling like one has become an adult is tied to forming one’s own family via marriage and parenthood.

When do we “feel” adult?

Our research shows that there are many pathways that young people take in transitioning to adulthood. Adulthood is a subjective process that no one marker appears to be able to define, though marriage and parenthood are particularly important.

Moving away from the more traditional school-to-work transition allows for a period of exploration, as young people figure out what they want to do in life. Acquiring markers of adulthood is associated with leaving behind deviant behavior, such as heavy partying and even theft, usually committed at younger ages. Furthermore, in ongoing research, we find that early parents without partners have poor objective and subjective health outcomes.

But, to come back to the original question – when do we “feel” like adults? – there is no simple answer.

Individuals become adults when they feel like adults, but this feeling is tied to the timely acquisition of certain markers, especially marriage and parenthood. Such subjective assessments are socially constructed.

In time, as the four “non-traditional” pathways become more commonplace, perhaps what is perceived as “on time” adulthood will shift so that individuals following those paths will view themselves as adults earlier in life.

The Conversation

Michael Vuolo, Assistant Professor of Sociology, The Ohio State University and Jeylan T Mortimer, Professor of Sociology, University of Minnesota

This article was originally published on The Conversation. Read the original article.

The hefty price of ‘study drug’ misuse on college campuses

Lina Begdache, Binghamton University, State University of New York

Nonmedical use of Attention Deficit Hyperactivity Disorder (ADHD) drugs on college campuses, such as Adderall, Ritalin, Concerta and Vyvanse, has exploded in the past decade, with a parallel rise in depression disorders and binge drinking among young adults.

These ADHD drugs are brain stimulants normally prescribed to individuals who display symptoms of ADHD. The stimulants boost the availability of dopamine, a chemical responsible for transmitting signals between the nerve cells (neurons) of the brain.

But now a growing student population has been using them as “study” drugs that help them stay up all night and concentrate. According to a 2007 National Institutes of Health (NIH) study, abuse of nonmedical prescription drugs such as ADHD meds among college students increased from 8.3 percent in 1996 to 14.6 percent in 2006.

Besides helping with concentration, dopamine is also associated with motivation and pleasurable feelings. Individuals who use these ADHD drugs nonmedically experience a surge in dopamine similar to that caused by illicit drugs, which induces a great sense of well-being.

My journey investigating nonmedical stimulant use on college campuses started seven years ago with a question from a student about the long-term effects of misuse on brain and physical health. Having an educational background in cell and molecular biology with a concentration in neuroscience, I started a literature review and soon became an educator on the topic, teaching students about the effects of such stimulant misuse on the maturing brain.

College students who take ADHD drugs without medical need could risk developing drug dependence as well as a host of mental ailments.

Substance abuse in college

College students have been reported to use many stimulants, including but not limited to Adderall, Ritalin and Dexedrine.

According to the 2008 National Survey on Drug Use and Health, students who used Adderall for nonmedical purposes were three times more likely than those who had not used Adderall nonmedically to use marijuana. They were also eight times more likely to use cocaine. In addition, 90 percent of the students who used Adderall nonmedically were binge alcohol consumers.

College students use ADHD drugs as ‘study’ drugs. David A Ellis, CC BY

Generally, college students who abuse ADHD drugs are white, male and part of a fraternity or a sorority. Often they have a low GPA as well.

ADHD drugs appear harmless to many, as they are often prescribed by physicians, even though these drugs carry a “black box” warning, which appears on a prescription drug’s label to call attention to serious or life-threatening risks. Despite such a strict warning from the FDA, many practitioners end up prescribing them based on subjective reporting of symptoms of ADHD. The lack of a gold standard for ADHD diagnosis has, in fact, led physicians to overprescribe the drug.

Furthermore, students who get hold of these prescriptions can easily sell pills on the black market. Students who buy these pills illicitly miss seeing the warning about potential abuse, addiction and other side effects.

What’s more, a chewable form of an ADHD drug has recently been introduced to the market. These are fruity-flavored extended-release drugs that dissolve instantly in the mouth. They are aimed at children, to deliver the medication quickly, but present a great potential for abuse.

The neurobiology of addiction

What are the consequences of taking these drugs without a medical condition?

The nonmedical use of ADHD drugs (stimulants) is of great concern because it raises levels of dopamine the same way illicit drugs do. Abuse of these drugs may therefore lead to the same addiction, brain rewiring and behavioral alterations.

While students may be aware of the harmful effects of “doing drugs,” nonmedical use of ADHD drugs may seem harmless because they are prescription medicine.

There is a limited body of knowledge on the effects of long-term nonmedical ADHD drug abuse on the developing brain. Of particular concern are potentially permanent alterations in the nerve-cell pathways of the maturing brain.

ADHD drugs can be addictive if used without medical necessity. Since brain development continues into the mid-20s and the young brain is remarkably plastic, this sets up a risk of chronic substance abuse, addiction and mental ailments.

Nonmedical ADHD drugs, like illegal drugs, activate a nerve pathway known as the “reward system of the brain.” This reward system is responsible for positive feelings such as motivation and pleasure. From an evolutionary point of view, the circuit controls an individual’s responses to natural rewards such as food and sex, which promote survival and reproductive fitness, respectively.

The response of the brain reward system to natural cues is highly regulated by a homeostatic mechanism – a process by which the body maintains its constant internal environment.

Individuals can ‘function’ only when the brain is on drugs. Steve Snodgrass, CC BY

However, a nonmedical ADHD drug, like an illegal drug, overactivates this “reward circuit,” disturbing the brain’s internal balance. This causes the brain to maladapt, structurally and functionally, and become “substance-dependent.” These changes happen at the genetic level.

As a consequence, the brain starts to need higher doses of the drug to respond to natural cues for motivation and life’s pleasures. This sets the stage for more substance abuse: the individual reaches for higher doses and more potent substances, and eventually a cycle of deepening dependence and drug abuse ensues.

Impact of abuse

The concern with nonmedical ADHD drug abuse is that it might prime the brain for the use of other substances such as alcohol, cocaine and marijuana (something the national surveys mentioned above revealed).

Major behavioral changes emerge, such as compulsive drug seeking, aggression, mood swings, psychosis, abnormal libido and suicidal thoughts.

In fact, there have been documented cases of college students who have taken their lives following an addiction to nonmedical ADHD drugs.

Animal studies show that the changes that lead to rewiring of the brain are due to an alteration in gene function. Some of these changes become permanent and heritable, especially with prolonged abuse, meaning that the altered (newly programmed) genes are passed down to offspring.

In fact, a growing body of evidence links the process of addiction (among many chronic diseases) to altered gene-function profiles passed down by ancestors. These altered profiles could predispose offspring to certain disorders.

Currently, prescription of ADHD drugs is based mostly on subjective, self-reported symptoms, and a gold standard for ADHD diagnosis has yet to be established. As a lyric from the rock band Marilyn Manson says:

Whatever does not kill you, it’s gonna leave a scar.

That’s the case with nonmedical ADHD drug abuse.

The Conversation

Lina Begdache, Research Assistant Professor, Binghamton University, State University of New York

This article was originally published on The Conversation. Read the original article.

Are some students more at risk of assault on campuses?

Leah Daigle, Georgia State University

When students arrive on campus to pursue their educational interests, they believe they are entering a safe environment. But while colleges are thought of as “ivory towers,” they can also be places where students become victims of crime.

In my research on victims of crime, I have found that particular types of students are more exposed to risks in a college environment. The risks are often tied to the party culture endemic on college campuses, where alcohol consumption is a major feature.

Who is on campus

Students often choose to enroll in universities in the U.S. for a quality education or to pursue the major and career path of their choice.

For almost all young people, this is the first time they are away from home and responsible for themselves, without adult supervision and with an abundance of unstructured time. Part of college culture involves spending time at parties and bars, using drugs recreationally and engaging in other risky behaviors (e.g., binge drinking, hooking up).

Alcohol consumption exposes students to risks. COD Newsroom, CC BY

Alcohol consumption becomes a major feature of such activities. Data indicate that about 65 percent of college students consume alcohol in a given month, and less than half of college students engage in binge drinking.

Research shows that such behaviors increase the likelihood of being a crime victim.

Drinking alcohol can increase the chances of being a crime victim because alcohol use impairs judgment and perception, decreases the ability to recognize and react to risk, impairs decision-making and delays reaction time.

Are all college students at risk?

Research shows about a third of college students could be victims of a crime during a given year. However, the risks could be different for different ethnic and racial groups on campus.

For example, non-Hispanic white men face the highest risk, most likely a result of their participation in the party culture. White male college students drink alcohol at greater levels and engage in riskier drinking than do female or African-American college students.

But a small percentage of students on American campuses are international students. In 2015, there were 1.13 million international college students enrolled in the U.S., with the largest share coming from China.

What risk do international students face of becoming crime victims?

International students face lower risks of assault. IFES – International Fellowship of Evangelical Students, CC BY-NC

Our research explored this possibility, given that international students may have unique experiences before and while attending college in the U.S.

Our study used data from the Fall 2012 American College Health Association’s National College Health Assessment II, a national survey of college students conducted each fall and spring. Our sample included 26,012 students, 8.6 percent of whom were international students.

We found that overall, when asked about their experiences over the previous 12 months, international students were less likely than domestic students to be “violently victimized” – that is, physically assaulted and/or verbally threatened. A physical assault might include being hit, punched, kicked, bitten or even shot, while a verbal threat might be experienced when a person is told that he or she is going to get beaten up or shot.

Nineteen percent of domestic students in our study indicated that they had been physically assaulted or verbally threatened, compared to 17 percent of international students.

Are female international students safer?

Subsequently, we looked at differences in risk for male and female international college students. We found that both male and female international students were less likely than their domestic counterparts to be victims of a crime.

Our study found 22 percent of male international students had been assaulted or threatened, compared with 26 percent of male domestic students. Fourteen percent of female international students had been assaulted or threatened, while 16 percent of female domestic students faced these experiences.

These differences may seem small, and in magnitude they are. But we used a large sample of over 26,000 students, which makes us confident that our findings are unlikely to result from a problem with our sample. Also, when you consider how many students attend college, a two-percentage-point difference (such as the one we found between female international and female domestic students) could translate into tens or hundreds of thousands of students, as the rough sketch below illustrates.
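
To make that magnitude concrete, here is a minimal back-of-envelope sketch in Python. The figure of roughly 20 million students enrolled in U.S. colleges is an outside assumption used only for illustration; it is not a number from our study.

    # Rough sketch: what a two-percentage-point difference in risk
    # could mean in absolute numbers of students.
    # Assumption (not from the study): roughly 20 million students
    # are enrolled in U.S. colleges in a given year.
    total_enrollment = 20_000_000
    risk_gap = 0.02  # two percentage points, as a proportion

    students_affected = total_enrollment * risk_gap
    print(f"{students_affected:,.0f} students")  # prints: 400,000 students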

In an additional set of analyses, we included other factors that previous research has shown to be related to risk on campus, such as alcohol consumption and being a first-year college student. We found that female international students faced fewer risks than did female domestic students. In fact, female international students’ odds of being harmed were 14 percent lower than those of female domestic students.
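
For readers unfamiliar with odds, a minimal sketch of the arithmetic may help. It computes an unadjusted odds ratio from the raw percentages reported above (14 percent versus 16 percent); the 14-percent-lower figure in our study comes from a model that also adjusts for the other factors, so the two numbers differ slightly.

    # Unadjusted odds ratio from the raw victimization rates above.
    # Odds = p / (1 - p); the odds ratio compares the two groups.
    p_international = 0.14  # female international students harmed
    p_domestic = 0.16       # female domestic students harmed

    odds_intl = p_international / (1 - p_international)  # ~0.163
    odds_dom = p_domestic / (1 - p_domestic)             # ~0.190
    odds_ratio = odds_intl / odds_dom                    # ~0.85

    # An odds ratio of about 0.85 means the odds are roughly
    # 15 percent lower, close to the adjusted 14 percent from the model.
    print(f"Unadjusted odds ratio: {odds_ratio:.2f}")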

And why might this be the case? We found that female international students tended to have a less risky profile than their domestic counterparts – they binge-drank less, were less likely to use drugs, were less likely to be first-year undergraduates and were less likely to have a disability.

While there are still some unanswered questions, we believe there must be something unique about how female international students experience college. It is possible that they are not fully engaging in college life, that they are under increased levels of guardianship, or that they experience culture conflict.

Colleges should work to make international students a thriving part of the campus community while ensuring that they remain safe. They should also provide culturally sensitive responses to international students who become victims.

The Conversation

Leah Daigle, Associate Professor of Criminal Justice and Criminology, Georgia State University

This article was originally published on The Conversation. Read the original article.