mutate {SparkR}                                                R Documentation
Return a new SparkDataFrame with the specified columns added or replaced.
mutate(.data, ...)

transform(`_data`, ...)

## S4 method for signature 'SparkDataFrame'
mutate(.data, ...)

## S4 method for signature 'SparkDataFrame'
transform(`_data`, ...)
.data    A SparkDataFrame.

col      A named argument of the form name = col.
A new SparkDataFrame with the new columns added or replaced.
Other SparkDataFrame functions: SparkDataFrame-class,
[[, agg,
arrange, as.data.frame,
attach, cache,
collect, colnames,
coltypes, columns,
count, dapply,
describe, dim,
distinct, dropDuplicates,
dropna, drop,
dtypes, except,
explain, filter,
first, group_by,
head, histogram,
insertInto, intersect,
isLocal, join,
limit, merge,
ncol, persist,
printSchema,
registerTempTable, rename,
repartition, sample,
saveAsTable, selectExpr,
select, showDF,
show, str,
take, unionAll,
unpersist, withColumn,
write.df, write.jdbc,
write.json, write.parquet,
write.text
## Not run:
##D sc <- sparkR.init()
##D sqlContext <- sparkRSQL.init(sc)
##D path <- "path/to/file.json"
##D df <- read.json(sqlContext, path)
##D newDF <- mutate(df, newCol = df$col1 * 5, newCol2 = df$col1 * 2)
##D names(newDF) # Will contain newCol, newCol2
##D newDF2 <- transform(df, newCol = df$col1 / 5, newCol2 = df$col1 * 2)
##D
##D df <- createDataFrame(sqlContext,
##D list(list("Andy", 30L), list("Justin", 19L)), c("name", "age"))
##D # Replace the "age" column
##D df1 <- mutate(df, age = df$age + 1L)
## End(Not run)
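To illustrate the add-versus-replace behavior described above, here is a minimal sketch (not run, since it assumes a SparkR session initialized as in the example; the column name age_next_year is a hypothetical name chosen for illustration):

```r
## Not run (requires a running Spark backend, e.g. via sparkR.init()):
df <- createDataFrame(sqlContext,
                      list(list("Andy", 30L), list("Justin", 19L)),
                      c("name", "age"))

# Adding a column: "age_next_year" does not match an existing column
# name, so it is appended after the existing columns.
df2 <- mutate(df, age_next_year = df$age + 1L)
columns(df2)  # "name" "age" "age_next_year"

# Replacing a column: the name "age" matches an existing column,
# so that column is overwritten rather than duplicated.
df3 <- mutate(df, age = df$age * 2L)
columns(df3)  # "name" "age"
```

Whether a named argument adds or replaces a column is decided purely by whether its name matches an existing column of the SparkDataFrame.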